
The UK has adopted a pro-innovation, principles-based approach to AI regulation that differs fundamentally from the EU's risk-classification model. Rather than creating a single AI-specific law or a dedicated AI regulator, the UK empowers existing sector regulators to interpret and apply five cross-cutting principles within their own regulatory frameworks.

This means that the rules applying to your AI system depend on what it does and which sector it operates in, not on an abstract risk category. An AI system used for credit decisions is regulated by the FCA under existing financial services rules. An AI medical device is regulated by the MHRA under medical device legislation. An AI recruitment tool must comply with the Equality Act 2010 as overseen by the EHRC.

While this approach offers flexibility, it also creates complexity. Multiple regulatory frameworks can apply to a single AI system simultaneously. A customer-facing AI chatbot in financial services, for example, might need to comply with FCA Consumer Duty, UK GDPR automated decision-making rules, consumer protection regulations, and the Online Safety Act 2023 — each enforced by a different regulator with different penalty frameworks.

Which regulators oversee AI in your sector

The UK's decentralised approach means you must identify every regulator with jurisdiction over your AI use case. The main regulators actively developing AI-specific guidance are:

  • ICO (Information Commissioner's Office): Oversees all AI processing of personal data under UK GDPR and DPA 2018. Enforces automated decision-making rules (reformed by DUAA 2025), transparency obligations, and Data Protection Impact Assessment requirements. Developing a statutory AI code of practice. Maximum penalty: £17.5 million or 4% of global turnover.
  • FCA (Financial Conduct Authority): Regulates AI in financial services through Consumer Duty, Senior Managers and Certification Regime (SM&CR), and existing conduct rules. Covers algorithmic trading, AI-driven credit decisions, robo-advice, and automated underwriting. Maximum penalty: unlimited.
  • MHRA (Medicines and Healthcare products Regulatory Agency): Regulates AI as a Medical Device (AIaMD) and Software as a Medical Device (SaMD) under UK medical device regulations. Requires conformity assessment and UKCA marking. Maximum penalty: unlimited fine and/or imprisonment.
  • CMA (Competition and Markets Authority): Examines AI's impact on competition and consumer protection. Published AI foundation model principles (September 2023). Enforces through consumer protection law and the Digital Markets, Competition and Consumers Act 2024. Maximum penalty: 10% of global turnover.
  • Ofcom: Regulates AI content recommendation systems and algorithmic transparency under the Online Safety Act 2023. Covers platforms using AI to moderate, recommend, or generate content. Maximum penalty: £18 million or 10% of qualifying worldwide revenue, whichever is greater.
  • EHRC (Equality and Human Rights Commission): Monitors AI bias in recruitment, facial recognition, and algorithmic decision-making under the Equality Act 2010. Can investigate and take enforcement action against discriminatory AI outcomes.
  • HSE (Health and Safety Executive): Oversees AI-controlled machinery and automated workplace systems under HSWA 1974 and relevant regulations. Maximum penalty: unlimited fine and/or imprisonment.

The Digital Regulation Cooperation Forum (DRCF) — comprising the ICO, FCA, CMA, and Ofcom — coordinates cross-regulator approaches to AI to reduce conflicting requirements. Their 2025/26 work plan focuses on resolving points of regulatory conflict.

Key legal obligations applying to AI now

Although there is no single AI law, several existing statutes impose concrete obligations on businesses deploying AI:

Automated decision-making (UK GDPR / DUAA 2025)

From 5 February 2026, the Data (Use and Access) Act 2025 reforms the rules on solely automated decisions that produce legal or similarly significant effects. Significant automated decisions are permitted on any lawful basis (including legitimate interests), but you must provide affected individuals with three safeguards:

  • Right to obtain human intervention
  • Right to express their point of view
  • Right to contest the decision

Automated decisions using special category data (health, ethnicity, religion, etc.) remain restricted to explicit consent or substantial public interest.

Equality and non-discrimination

The Equality Act 2010 prohibits discriminatory outcomes from AI systems, whether the discrimination is direct or indirect. If your AI produces outputs that disproportionately disadvantage people with a protected characteristic — even without discriminatory intent — this can constitute unlawful indirect discrimination. Key risk areas include AI in recruitment (CV screening, candidate ranking), pricing and underwriting, service access decisions, and HR performance management.

Transparency obligations

Multiple frameworks require transparency about AI use. UK GDPR Articles 13-14 require 'meaningful information about the logic involved' in automated decisions. The FCA Consumer Duty requires firms to explain AI-driven decisions to consumers. The ICO's guidance on 'Explaining decisions made with AI' identifies six explanation types: rationale, responsibility, data, fairness, safety/performance, and impact.

  • Current UK approach: pro-innovation, principles-based, sector-regulated (no single AI law).
  • Government AI Bill: announced for the second half of 2026 at the earliest. Expected to cover AI safety, copyright, and making voluntary developer commitments legally binding.
  • AI Security Institute (AISI): renamed from the AI Safety Institute in February 2025. A directorate of DSIT that researches frontier AI governance and conducts model evaluations.
  • DRCF AI coordination: the ICO, FCA, CMA, and Ofcom coordinate through the Digital Regulation Cooperation Forum to align AI approaches.
  • EU AI Act exposure: UK businesses serving the EU market must comply with EU AI Act obligations on a staggered timeline (prohibited practices from February 2025, high-risk systems from August 2026).
  • DUAA 2025 automated decisions: from 5 February 2026, significant automated decisions are permitted on any lawful basis with safeguards (human intervention, express views, contest).
  • DPIA requirement: a Data Protection Impact Assessment is mandatory for AI processing likely to result in high risk to individuals' rights and freedoms.


  1. Map your AI systems and use cases

    Create an inventory of all AI systems your business uses or develops. For each, record its purpose, the data it processes, the decisions it influences, and who is affected by its outputs. This inventory is the foundation for all subsequent compliance steps.
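As a rough sketch, such an inventory can be kept as structured records. The field names below are illustrative assumptions, not a regulator-prescribed schema:

```python
from dataclasses import dataclass

# Illustrative inventory record for one AI system; adapt the fields
# to your own governance process — no regulator mandates this shape.
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    personal_data: list[str]    # categories of personal data processed
    decisions: str              # decisions the system influences
    affected_groups: list[str]  # who is affected by its outputs
    regulators: list[str]       # jurisdiction, identified at step 2

inventory = [
    AISystemRecord(
        name="cv-screening-tool",
        purpose="Rank job applicants for shortlisting",
        personal_data=["employment history", "education"],
        decisions="Shortlisting for interview",
        affected_groups=["job applicants"],
        regulators=["ICO", "EHRC"],
    ),
]
```

Keeping the regulator mapping alongside each record makes it straightforward to answer a cross-sector inquiry from any one regulator later.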

  2. Identify which regulators have jurisdiction

    For each AI system, determine which sector regulators oversee your use case. A single AI system may fall under multiple regulators — for example, an AI recruitment tool engages the ICO (data protection), EHRC (equality), and potentially the FCA if used in financial services hiring.

  3. Assess against the five AI principles

    Review each AI system against the five cross-cutting principles (safety/security/robustness, transparency/explainability, fairness, accountability/governance, contestability/redress). Document how you meet each principle and identify gaps.

  4. Conduct Data Protection Impact Assessments

    For any AI system processing personal data with potential high risk, complete a DPIA before deployment. Use the ICO's AI and data protection risk toolkit. DPIAs are mandatory under UK GDPR Article 35 for profiling, large-scale processing, and automated decision-making.

  5. Implement automated decision-making safeguards

    For AI systems making decisions with legal or similarly significant effects, implement the three DUAA 2025 safeguards — right to human intervention, right to express views, and right to contest. Document the safeguards and make them accessible to affected individuals.
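A minimal sketch of how the three safeguards might be modelled in a decision-handling system. The class and method names here are hypothetical, not drawn from the Act or any ICO template:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto

# The three DUAA 2025 safeguards, modelled as request types.
class SafeguardRequest(Enum):
    HUMAN_INTERVENTION = auto()
    EXPRESS_VIEWS = auto()
    CONTEST_DECISION = auto()

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str
    decided_on: date
    under_human_review: bool = False

    def handle(self, request: SafeguardRequest, note: str = "") -> str:
        # Route each safeguard request to a documented action so the
        # response itself forms part of the audit trail.
        if request is SafeguardRequest.HUMAN_INTERVENTION:
            self.under_human_review = True
            return "escalated to human reviewer"
        if request is SafeguardRequest.EXPRESS_VIEWS:
            return f"representation recorded: {note}"
        return "decision contested; re-assessment opened"

decision = AutomatedDecision(
    subject_id="applicant-42",
    outcome="credit declined",
    decided_on=date(2026, 2, 5),
)
decision.handle(SafeguardRequest.HUMAN_INTERVENTION)
```

Whatever shape your implementation takes, the key point from the step above stands: each safeguard must be accessible to the affected individual and its exercise must be documented.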

  6. Test for bias and discriminatory outcomes

    Test AI systems for bias across protected characteristics before deployment and on an ongoing basis. Implement fairness metrics appropriate to your use case. For AI in recruitment, follow the GOV.UK Responsible AI in Recruitment guide and consider Algorithmic Impact Assessments.
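One simple, illustrative bias check is to compare selection rates between groups. Note the Equality Act 2010 sets no numeric threshold: the 0.8 cut-off below is the US "four-fifths" rule of thumb, used here purely as an example trigger for further investigation, not a UK legal test:

```python
# Compare selection rates across groups; `outcomes` maps each group
# to (number selected, number considered).
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    # Ratio of the lowest group selection rate to the highest.
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
ratio = disparate_impact_ratio(outcomes)  # 0.24 / 0.40 = 0.6
flagged = ratio < 0.8  # illustrative threshold only; investigate, don't conclude
```

A flagged ratio is a prompt to investigate causes and consider objective justification, not proof of unlawful discrimination; choose fairness metrics suited to your actual use case.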

  7. Document and maintain transparency

    Update privacy notices to describe AI use, including the logic involved in automated decisions and their significance. Maintain Records of Processing Activities (ROPA) documenting automated decision-making processes. Ensure affected individuals can understand how AI decisions are made.

  8. Establish AI governance structures

    Assign clear accountability for AI systems — ideally a named senior individual or committee. Create policies covering AI procurement, development, testing, deployment, monitoring, and retirement. Ensure governance structures can respond to regulator inquiries across all relevant sectors.

  9. Monitor for the EU AI Act if serving EU markets

    If your business operates in or sells AI products into the EU single market, assess your exposure to the EU AI Act. Prohibited AI practices have applied since February 2025. High-risk AI system obligations take effect from August 2026. Compliance with UK principles does not automatically satisfy EU requirements.