Guide
Using AI in recruitment and HR
Compliance requirements when using AI for recruitment, screening, and HR decisions. Covers equality law risks, data protection obligations, bias testing, automated decision-making safeguards, and practical steps for lawful deployment.
Check if your AI recruitment tools follow equality and data protection laws. Test for bias, keep clear records, and tell candidates how you use their data. You are responsible for any discrimination the tool causes.
- Test AI tools for bias against protected groups
- Get consent before processing special category data
- Do an impact assessment for high-risk AI systems
- Explain automated decisions to job applicants
- Keep records of testing and decisions
- Public sector bodies must comply with the Public Sector Equality Duty
- Compensation for discrimination claims is uncapped in employment tribunals
- ICO can fine for data protection breaches
- Northern Ireland has different equality laws
Artificial intelligence tools for recruitment and HR are now widely available. CV screening software, candidate ranking algorithms, video interview analysis, psychometric profiling, and automated shortlisting are marketed as ways to reduce hiring time and improve consistency. However, these tools carry significant legal risk.
Unlike a human recruiter, an AI system can discriminate at scale without anyone noticing. A model trained on historical hiring data may learn to penalise candidates from particular demographic groups, not because it was programmed to do so, but because the patterns in the training data reflect past biases. The legal consequences fall on the employer, not the software vendor.
This guide explains the compliance requirements that apply when you use AI in recruitment, people management, performance assessment, or any HR decision-making process. The obligations come from three overlapping areas of law: equality legislation, data protection, and the new automated decision-making rules under the Data (Use and Access) Act 2025.
Who this guide is for
This guide applies to any UK employer that uses AI or algorithmic tools in HR processes, including:
- Recruitment: CV screening, candidate matching, video interview analysis, chatbot-based screening, automated shortlisting
- People management: Performance scoring, promotion recommendations, redundancy selection
- Workforce planning: Shift allocation, productivity monitoring, absence pattern analysis
- Compensation: Pay benchmarking tools, bonus allocation algorithms
It applies regardless of whether you built the AI tool yourself or purchased it from a third-party vendor. Under UK law, the employer is the data controller and bears responsibility for how the tool processes personal data and affects individuals.
Equality law risks
The Equality Act 2010 prohibits both direct and indirect discrimination on the basis of nine protected characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. AI recruitment tools can breach these protections even when no human intended to discriminate.
Recruitment-specific discrimination risks
AI tools can discriminate through proxy characteristics. A model does not need to know a candidate's race or gender to discriminate against them. Postcode data can serve as a proxy for ethnicity. Name patterns can correlate with national origin. Employment gaps may disproportionately affect women who took maternity leave. University attended can correlate with socioeconomic background and, indirectly, with race.
Specific risks in AI recruitment include:
- Training data bias: If your historical hires were predominantly male, the model may learn to favour male candidates
- Video analysis: Facial analysis and speech pattern tools have documented accuracy disparities across demographic groups
- Keyword filtering: Automated CV screening that penalises career breaks disproportionately affects women and disabled people
- Language models: Natural language processing can embed cultural and gender biases from training corpora
Employers cannot defend an indirect discrimination claim simply by saying the AI made the decision. Section 19 of the Equality Act applies to any provision, criterion, or practice that puts people sharing a protected characteristic at a particular disadvantage. An algorithm is a practice.
Data protection requirements
Using AI in recruitment involves processing personal data, often including special category data such as health information, ethnicity, or disability status. UK GDPR requires a lawful basis for this processing, and the ICO expects organisations to meet heightened transparency standards when AI is involved.
Conduct a Data Protection Impact Assessment
A DPIA is mandatory under UK GDPR Article 35 when processing is likely to result in a high risk to individuals. AI-based recruitment decisions almost always meet this threshold because they involve systematic evaluation of personal aspects, automated decision-making with legal or significant effects, and processing at scale.
Bias testing
There is no single UK statute that mandates bias testing for AI systems. However, the combined effect of the Equality Act, UK GDPR, and ICO guidance means that failing to test for bias creates substantial legal exposure. If an AI tool produces discriminatory outcomes and you did not test for this, it becomes very difficult to defend a discrimination claim or satisfy the ICO that you have met the accountability principle.
Automated decision-making safeguards
The Data (Use and Access) Act 2025 reformed the rules on solely automated decisions that produce legal or similarly significant effects. Recruitment decisions, whether to shortlist, interview, or hire a candidate, clearly fall within scope. From 5 February 2026, these decisions are permitted on any lawful basis, but you must provide safeguards: candidates must be given information about the decision and be able to make representations, obtain human intervention, and contest the outcome.
Practical steps for compliance
The following actions will help you use AI recruitment tools lawfully. They are not optional good practice; each addresses a specific legal obligation under the Equality Act, UK GDPR, or the DUAA 2025.
1. Audit your AI recruitment tools
Map every AI or algorithmic tool used in your hiring and HR processes. For each tool, document what data it processes, what decisions it influences, who the vendor is, and what training data was used. If the vendor cannot tell you what data the model was trained on, this is a red flag.
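One way to keep this mapping consistent is to hold it as structured data rather than scattered documents. The sketch below is illustrative only: the tool name, vendor, and field names are assumptions, not references to any real product, and the fields mirror the items the audit step asks you to record.

```python
# Illustrative AI tool inventory entry. All names and fields are
# hypothetical; adapt them to your own procurement records.
tool_inventory = [
    {
        "tool": "cv-screening-service",        # hypothetical tool name
        "vendor": "ExampleVendor Ltd",         # hypothetical vendor
        "data_processed": ["CV text", "application answers"],
        "decisions_influenced": ["shortlisting"],
        "training_data_known": False,          # vendor could not say what the
                                               # model was trained on
    },
]

# Surface the red flag the guide describes: unknown training data.
flagged = [t["tool"] for t in tool_inventory if not t["training_data_known"]]
```

A register like this also feeds directly into the DPIA in the next step, since it already lists the data processed and decisions influenced per tool.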
2. Conduct a DPIA before deployment
Complete a Data Protection Impact Assessment for each AI recruitment tool before you start using it. Assess the risks to candidates' rights, identify mitigation measures, and document your analysis. Review the DPIA at least annually or whenever the tool is updated.
3. Test for bias across protected characteristics
Analyse the tool's outputs for disparate impact across gender, ethnicity, age, disability, and other protected characteristics. Use statistical methods such as adverse impact ratios. If you find disparities, investigate the cause and take corrective action before continuing to use the tool.
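The adverse impact ratio mentioned above compares each group's selection rate against the highest group's rate; a ratio below 0.8 (the widely used "four-fifths" benchmark) is a common trigger for investigation. A minimal sketch, assuming you can obtain shortlisting counts per group; the group labels and figures are invented for illustration.

```python
# Sketch of a four-fifths (adverse impact ratio) check on AI shortlisting
# outputs. Group names and counts below are illustrative assumptions.

def adverse_impact_ratios(outcomes, threshold=0.8):
    """outcomes: dict mapping group -> (selected, total applicants).
    Returns each group's selection rate, its ratio against the
    best-performing group, and a flag when the ratio falls below
    the four-fifths threshold."""
    rates = {group: sel / tot for group, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {
        group: {
            "rate": rate,
            "ratio": rate / best,
            "flag": (rate / best) < threshold,
        }
        for group, rate in rates.items()
    }

# Illustrative figures only:
results = adverse_impact_ratios({
    "group_a": (48, 120),   # 40% shortlisted
    "group_b": (30, 100),   # 30% shortlisted -> ratio 0.75, flagged
})
```

A flagged ratio is a prompt to investigate, not proof of discrimination: small samples and confounding factors matter, so pair this with significance testing before drawing conclusions.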
4. Inform candidates about AI involvement
Tell candidates in clear, plain language that AI tools will be used in the recruitment process. Explain what the tool does, what data it processes, and how it influences decisions. Include this information in your privacy notice and in the job application process itself.
5. Enable meaningful human review
Ensure that no candidate is rejected solely by an automated system without a human reviewer having the opportunity to consider the decision. The human reviewer must have genuine authority to override the AI recommendation and sufficient information to do so meaningfully.
6. Keep records and maintain audit trails
Document the AI tool's recommendations, the human decisions made, any overrides, and the reasons for them. Retain records for at least the duration of any potential discrimination claim: tribunal claims must normally be brought within three months of the decision, extended by ACAS early conciliation, so keeping records for at least six months is prudent, and longer if proceedings are initiated. Records also support ICO accountability requirements.
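The audit-trail fields the guide lists can be captured in a simple structured record. This is a minimal sketch: the field names and example values are assumptions for illustration, not a prescribed schema.

```python
# Minimal sketch of an audit-trail record for an AI-assisted hiring
# decision. Field names are illustrative assumptions; adapt to your
# own systems. Uses a pseudonymised candidate reference, not a name.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    candidate_ref: str            # pseudonymised reference
    tool_name: str
    tool_recommendation: str      # e.g. "reject", "shortlist"
    human_decision: str
    override: bool                # True if the reviewer departed from the tool
    reason: str                   # reasons are required, especially on override
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example of a human override being recorded:
record = DecisionRecord(
    candidate_ref="C-1042",
    tool_name="cv-screener",
    tool_recommendation="reject",
    human_decision="shortlist",
    override=True,
    reason="Career break explained in application; skills match is strong",
)
```

Recording overrides with reasons serves both purposes the guide names: it evidences meaningful human review under the DUAA safeguards and provides the contemporaneous documentation needed to respond to a discrimination claim or an ICO enquiry.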