
Artificial intelligence tools for recruitment and HR are now widely available. CV screening software, candidate ranking algorithms, video interview analysis, psychometric profiling, and automated shortlisting are marketed as ways to reduce hiring time and improve consistency. However, these tools carry significant legal risk.

Unlike a human recruiter, an AI system can discriminate at scale without anyone noticing. A model trained on historical hiring data may learn to penalise candidates from particular demographic groups, not because it was programmed to do so, but because the patterns in the training data reflect past biases. The legal consequences fall on the employer, not the software vendor.

This guide explains the compliance requirements that apply when you use AI in recruitment, people management, performance assessment, or any HR decision-making process. The obligations come from three overlapping areas of law: equality legislation, data protection, and the new automated decision-making rules under the Data (Use and Access) Act 2025.

Who this guide is for

This guide applies to any UK employer that uses AI or algorithmic tools in HR processes, including:

  • Recruitment: CV screening, candidate matching, video interview analysis, chatbot-based screening, automated shortlisting
  • People management: Performance scoring, promotion recommendations, redundancy selection
  • Workforce planning: Shift allocation, productivity monitoring, absence pattern analysis
  • Compensation: Pay benchmarking tools, bonus allocation algorithms

It applies regardless of whether you built the AI tool yourself or purchased it from a third-party vendor. Under UK law, the employer is the data controller and bears responsibility for how the tool processes personal data and affects individuals.

Equality law risks

The Equality Act 2010 prohibits both direct and indirect discrimination on the basis of nine protected characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. AI recruitment tools can breach these protections even when no human intended to discriminate.

Recruitment-specific discrimination risks

AI tools can discriminate through proxy characteristics. A model does not need to know a candidate's race or gender to discriminate against them. Postcode data can serve as a proxy for ethnicity. Name patterns can correlate with national origin. Employment gaps may disproportionately affect women who took maternity leave. University attended can correlate with socioeconomic background and, indirectly, with race.
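The proxy problem above can be checked empirically. The sketch below (all field names and data are hypothetical) measures, for each value of a candidate feature such as postcode area, how concentrated a single protected group is within it. A share near 1.0 means the feature nearly identifies the group, so a model using it can discriminate without ever seeing the protected characteristic:

```python
from collections import Counter, defaultdict

def max_group_share(records, feature, protected):
    """For each value of `feature`, the share held by its most common
    protected group. Values near 1.0 mean the feature can act as a
    proxy for the protected characteristic."""
    buckets = defaultdict(Counter)
    for rec in records:
        buckets[rec[feature]][rec[protected]] += 1
    return {
        value: max(counts.values()) / sum(counts.values())
        for value, counts in buckets.items()
    }

# Illustrative applicant records (synthetic, for demonstration only)
applicants = [
    {"postcode_area": "AB1", "ethnicity": "group_x"},
    {"postcode_area": "AB1", "ethnicity": "group_x"},
    {"postcode_area": "AB1", "ethnicity": "group_y"},
    {"postcode_area": "CD2", "ethnicity": "group_y"},
    {"postcode_area": "CD2", "ethnicity": "group_y"},
]
shares = max_group_share(applicants, "postcode_area", "ethnicity")
# shares == {"AB1": 0.666..., "CD2": 1.0}
```

A high share does not prove discrimination by itself, but it flags features that warrant scrutiny before they are fed into a screening model.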

Specific risks in AI recruitment include:

  • Training data bias: If your historical hires were predominantly male, the model may learn to favour male candidates
  • Video analysis: Facial analysis and speech pattern tools have documented accuracy disparities across demographic groups
  • Keyword filtering: Automated CV screening that penalises career breaks disproportionately affects women and disabled people
  • Language models: Natural language processing can embed cultural and gender biases from training corpora

Employers cannot defend an indirect discrimination claim simply by saying the AI made the decision. Section 19 of the Equality Act applies to any provision, criterion, or practice that puts people sharing a protected characteristic at a particular disadvantage. An algorithm is a practice.

Data protection requirements

Using AI in recruitment involves processing personal data, often including special category data such as health information, ethnicity, or disability status. UK GDPR requires a lawful basis for this processing, and the ICO expects organisations to meet heightened transparency standards when AI is involved.

Conduct a Data Protection Impact Assessment

A DPIA is mandatory under UK GDPR Article 35 when processing is likely to result in a high risk to individuals. AI-based recruitment decisions almost always meet this threshold because they involve systematic evaluation of personal aspects, automated decision-making with legal or significant effects, and processing at scale.

Bias testing

There is no single UK statute that mandates bias testing for AI systems. However, the combined effect of the Equality Act, UK GDPR, and ICO guidance means that failing to test for bias creates substantial legal exposure. If an AI tool produces discriminatory outcomes and you did not test for this, it becomes very difficult to defend a discrimination claim or satisfy the ICO that you have met the accountability principle.
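One common starting point for the testing described above is to compare selection rates across groups. The sketch below uses the "four-fifths rule" threshold familiar from US practice as an illustrative flag; UK equality law sets no fixed numeric cut-off, so treat the 0.8 figure as a screening heuristic, not a legal test, and all data here as made up:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, shortlisted: bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate relative to the best-performing group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Synthetic screening outcomes: 40/100 of group_a shortlisted, 25/100 of group_b
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 25 + [("group_b", False)] * 75
)
ratios = adverse_impact_ratios(outcomes)
flagged = {g for g, r in ratios.items() if r < 0.8}
# ratios == {"group_a": 1.0, "group_b": 0.625}; flagged == {"group_b"}
```

A flagged group does not automatically mean the tool is unlawful, but it is exactly the kind of evidence a tribunal or the ICO would expect you to have looked for, and to have acted on.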

Automated decision-making safeguards

The Data (Use and Access) Act 2025 reformed the rules on solely automated decisions that produce legal or similarly significant effects. Recruitment decisions, whether to shortlist, interview, or hire a candidate, clearly fall within scope. From 5 February 2026, these decisions are permitted on any lawful basis, but robust safeguards must be maintained: individuals must be told that an automated decision has been made, be able to make representations, obtain human intervention, and contest the decision.
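One way to operationalise the human-intervention safeguard is to make sure no adverse automated outcome stands without review. The sketch below is a minimal illustration, not a definitive implementation: the field names, score, and review rule are all assumptions, and how you implement the DUAA safeguards depends on your own systems:

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningDecision:
    candidate_id: str
    model_score: float
    outcome: str                  # "shortlist" or "reject"
    reviewed_by_human: bool = False
    reasons: list = field(default_factory=list)

def requires_human_review(decision: ScreeningDecision) -> bool:
    # Policy assumed here: a rejection must never stand as a solely
    # automated decision; shortlisting proceeds to interview anyway.
    return decision.outcome == "reject" and not decision.reviewed_by_human

queue = [
    ScreeningDecision("c-001", 0.82, "shortlist"),
    ScreeningDecision("c-002", 0.31, "reject"),
]
for_review = [d.candidate_id for d in queue if requires_human_review(d)]
# for_review == ["c-002"]
```

Recording who reviewed each flagged decision, and why, also gives you the audit trail needed to show the review was meaningful rather than a rubber stamp.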

Practical steps for compliance

The following actions will help you use AI recruitment tools lawfully. They are not optional good practice; each addresses a specific legal obligation under the Equality Act, UK GDPR, or the DUAA 2025.