
AI systems can discriminate even when no human intended them to. This is not a theoretical risk; it is a documented pattern across industries and jurisdictions. In the UK, the Equality Act 2010 makes employers and service providers liable for discriminatory outcomes regardless of whether discrimination was intentional. An algorithm that produces discriminatory results is treated the same way as a human decision-maker who discriminates.

This guide explains how equality law applies to AI, what indirect discrimination through proxy characteristics means in practice, and what steps businesses should take to identify and prevent algorithmic bias.

Why AI discriminates

AI systems learn patterns from historical data. If that data reflects existing societal inequalities, the model will learn and reproduce those inequalities. This is not a flaw in any specific AI product; it is a structural feature of how machine learning works.

Common causes of AI bias include:

  • Biased training data: Historical data reflecting past discrimination (e.g., hiring records from a period when an industry was predominantly male)
  • Proxy variables: Features that correlate with protected characteristics without directly encoding them (e.g., postcode correlating with ethnicity)
  • Measurement bias: Using metrics that systematically disadvantage certain groups (e.g., defining 'success' based on outcomes that were themselves shaped by discrimination)
  • Selection bias: Training data that does not represent the full population the model will be applied to
  • Feedback loops: AI outputs that reinforce the patterns they were trained on, amplifying existing disparities over time
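Proxy variables in particular can be screened for statistically. As a minimal sketch, the following measures the association between a model feature and a protected characteristic using Cramér's V; the column names, data, and threshold are all hypothetical, and a high score flags a feature for investigation rather than proving discrimination.

```python
# Sketch: checking whether a model feature acts as a proxy for a protected
# characteristic. All data here is fabricated; 'postcode_area' and
# 'ethnicity' are hypothetical columns drawn from your own records.
from collections import Counter
from math import sqrt

def cramers_v(xs, ys):
    """Cramér's V association between two categorical variables
    (0 = no association, 1 = perfect association)."""
    n = len(xs)
    x_counts = Counter(xs)
    y_counts = Counter(ys)
    pair_counts = Counter(zip(xs, ys))
    chi2 = 0.0
    for x, nx in x_counts.items():
        for y, ny in y_counts.items():
            expected = nx * ny / n
            observed = pair_counts.get((x, y), 0)
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(x_counts), len(y_counts)) - 1
    return sqrt(chi2 / (n * k)) if k > 0 else 0.0

# Toy records: postcode area tracks ethnicity closely in this fabricated sample
postcode_area = ["N1", "N1", "N1", "E2", "E2", "E2", "E2", "N1"]
ethnicity     = ["A",  "A",  "A",  "B",  "B",  "B",  "A",  "A"]

v = cramers_v(postcode_area, ethnicity)
if v > 0.5:  # threshold is a policy choice, not a legal standard
    print(f"postcode_area is a strong proxy candidate (V = {v:.2f})")
```

In practice this check would run over every candidate feature against every protected characteristic for which you lawfully hold data; a feature can also act as a proxy in combination with others, so a pairwise screen is a starting point, not a complete audit.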

Equality Act obligations

The Equality Act 2010 prohibits direct discrimination (treating someone less favourably because of a protected characteristic) and indirect discrimination (applying a provision, criterion, or practice that disproportionately disadvantages people sharing a protected characteristic). AI-driven decisions engage both prohibitions, but indirect discrimination is the more common risk.

Testing for bias

There is currently no UK statute that mandates bias testing for AI systems. However, the practical effect of the Equality Act, combined with the ICO's expectations under UK GDPR, means that businesses deploying AI should test for bias as a matter of routine compliance. The question is not whether you should test, but how.
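One common first-pass screen is the "four-fifths rule": compare selection rates between groups and flag any ratio below 0.8. This heuristic originates in US employment practice and is not a UK legal test, but it is a useful routine signal. A minimal sketch, with fabricated group labels and outcomes:

```python
# Sketch of a routine disparate-impact screen using the four-fifths rule.
# A ratio below 0.8 is a flag for further investigation, not a finding
# of discrimination. All groups and outcomes here are illustrative.
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of booleans (selected or not)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes by group
outcomes = {
    "group_a": [True] * 60 + [False] * 40,   # 60% selected
    "group_b": [True] * 42 + [False] * 58,   # 42% selected
}

ratio = adverse_impact_ratio(outcomes)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("below the four-fifths threshold: investigate further")
```

Whatever metric you choose, document the test, the threshold, and the remedial steps taken: that record is exactly the evidence the proportionality defence discussed below requires.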

Practical examples of AI bias

The following examples show how AI bias manifests in different business contexts, and why testing and governance are essential.

Recruitment tools

A CV screening tool trained on a company's historical hiring data learned to downrank candidates who attended women's colleges, because the company's past hires were predominantly male. The tool did not use gender as an input, but college name served as a proxy. This is classic indirect discrimination under section 19 of the Equality Act: a provision (the algorithm's scoring criteria) that puts women at a particular disadvantage compared to men.

Pricing algorithms

Insurance pricing algorithms that use postcode as a rating factor can produce outcomes that correlate with ethnicity, creating potential indirect discrimination under the Equality Act. While the use of actuarially justified factors is permitted, the pricing algorithm must not produce unjustified disparate impact. The FCA has investigated instances where pricing models charged higher premiums to customers in areas with higher ethnic minority populations, even after controlling for risk.

Credit scoring

AI credit scoring models can disadvantage groups with less conventional financial histories. Applicants who use informal savings methods, have employment gaps due to caring responsibilities, or lack a traditional credit footprint may receive lower scores. If this disproportionately affects people sharing a protected characteristic (for example, women who took maternity leave, or ethnic minority communities with different banking traditions), it can constitute indirect discrimination.

Customer service chatbots

Natural language processing models can perform differently for users who speak English as a second language, use non-standard dialects, or have speech disabilities. If a chatbot systematically fails to understand or properly serve customers from particular demographic groups, the service provider may face a discrimination claim under the Equality Act's provisions on the provision of services.
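Differential performance of this kind is measurable from interaction logs. As a minimal sketch, the following compares failure-to-understand rates across user groups; the log, group labels, and rates are all fabricated, and in practice you would need a lawful basis under UK GDPR for any demographic data used in such an audit.

```python
# Sketch: comparing a chatbot's failure-to-understand rate across user groups.
# 'interaction_log' is a hypothetical audit log of (user_group, understood) pairs.
from collections import defaultdict

interaction_log = [
    ("native_speaker", True), ("native_speaker", True), ("native_speaker", True),
    ("native_speaker", True), ("native_speaker", False),
    ("esl_speaker", True), ("esl_speaker", False), ("esl_speaker", False),
    ("esl_speaker", True), ("esl_speaker", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [failures, total interactions]
for group, understood in interaction_log:
    totals[group][1] += 1
    if not understood:
        totals[group][0] += 1

failure_rates = {g: fails / total for g, (fails, total) in totals.items()}
for group, rate in sorted(failure_rates.items()):
    print(f"{group}: {rate:.0%} failure rate")
```

A persistent gap between groups, as in this fabricated sample, is the kind of evidence a claimant could point to, and the kind of finding an internal audit should surface first.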

The proportionate means defence

Indirect discrimination can be justified if the provision, criterion, or practice is a proportionate means of achieving a legitimate aim (section 19(2)(d) of the Equality Act). For AI systems, this means:

  • The business must have a legitimate aim for using the AI tool (e.g., efficient candidate screening, accurate risk assessment)
  • The AI tool must be an appropriate way to achieve that aim
  • The discriminatory impact must be proportionate to the benefit achieved
  • Less discriminatory alternatives must have been considered

This defence is fact-specific and places the burden of proof on the respondent. Simply asserting that the AI tool is more efficient is unlikely to be sufficient. You would need to demonstrate that you considered the discriminatory impact, explored alternatives, and concluded that the approach was proportionate.

Who enforces equality law on AI?

Multiple bodies have a role in overseeing AI and equality:

  • Equality and Human Rights Commission (EHRC): The primary enforcement body for the Equality Act. Has published guidance on AI and equality and can take enforcement action against organisations whose AI systems discriminate
  • Information Commissioner's Office (ICO): Enforces UK GDPR, including the fairness principle and automated decision-making provisions. The ICO and EHRC have a memorandum of understanding on AI oversight
  • Sector regulators: The FCA (financial services), Ofcom (communications), and CMA (competition) all have roles in their respective sectors
  • Employment tribunals: Individual claimants can bring discrimination claims where AI has been used in employment decisions
  • County courts: Discrimination claims in the provision of goods and services are heard in the county courts