
The UK does not have a single AI law. Unlike the EU, which passed the AI Act as a comprehensive horizontal regulation, the UK has chosen a principles-based, sector-specific approach. Existing regulators — the ICO, FCA, Ofcom, CMA, MHRA, HSE, and others — apply their existing powers to AI systems within their domains.

This means there is no single regulator you register with and no universal risk classification system. Instead, the regulatory requirements that apply to your AI system depend on what it does, who it affects, and which sector it operates in.

The government set out this approach in its 2023 white paper A pro-innovation approach to AI regulation and reinforced it in the 2024 response to consultation. The AI Bill announced in the July 2025 King's Speech is expected to place the five principles on a statutory footing, but the sector-specific model remains the foundation.

The five AI principles

The 2023 white paper set out five cross-sectoral principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

These principles are currently non-statutory. Regulators are expected to interpret and apply them within their existing frameworks, adapting them to the specific risks and contexts of their sectors. The ICO, for example, maps the principles to UK GDPR requirements for automated decision-making; the FCA applies them through its existing rules on algorithmic trading and consumer outcomes.

The government has indicated that the forthcoming AI Bill will place a duty on regulators to have regard to these principles, giving them a firmer legal basis without creating a rigid compliance regime.

Which regulators cover AI

The sector-based model means that several regulators may have overlapping jurisdiction over a single AI system. An AI-powered recruitment tool, for instance, falls under ICO oversight for data protection, EHRC scrutiny for discrimination, and potentially Ofcom regulation if it operates on an online platform.

Businesses developing or deploying AI should identify all the regulators whose remit covers their use case. The Digital Regulation Cooperation Forum (DRCF) coordinates between regulators to reduce duplication and provide joined-up guidance.

Key institutions

The AI Security Institute focuses primarily on frontier AI models — the most powerful systems developed by major AI laboratories. Most businesses deploying AI will not interact directly with the Institute unless they are developing or fine-tuning foundation models. The DRCF, by contrast, produces practical guidance relevant to any business using AI within a regulated sector.

What is coming next

The legislative landscape is evolving rapidly. Key developments to monitor include:

  • AI Bill: Expected to place the five principles on a statutory footing and give regulators clearer mandates
  • ICO AI code of practice: The ICO is developing a statutory code under the Data (Use and Access) Act 2025 covering AI and automated decision-making
  • Copyright and AI: The government is considering how to balance AI training needs with creators' rights, following its 2024 consultation
  • International alignment: The UK is participating in the Hiroshima AI Process and the Council of Europe AI Convention, which may influence domestic regulation

Businesses should not wait for legislation before acting. Regulators already have enforcement powers that apply to AI systems, and they are actively using them. Building compliance into your AI processes now will reduce the cost of adapting when statutory requirements arrive.