The UK does not have a single AI law. Unlike the EU, which passed the AI Act as a comprehensive horizontal regulation, the UK has chosen a principles-based, sector-specific approach. Existing regulators — the ICO, FCA, Ofcom, CMA, MHRA, HSE, and others — apply their existing powers to AI systems within their domains.
This means there is no single regulator you register with and no universal risk classification system. Instead, the regulatory requirements that apply to your AI system depend on what it does, who it affects, and which sector it operates in.
The government set out this approach in its 2023 white paper A pro-innovation approach to AI regulation and reinforced it in the 2024 response to consultation. The AI Bill announced in the July 2025 King's Speech is expected to place the five principles on a statutory footing, but the sector-specific model remains the foundation.
The five AI principles
The five principles, set out in the 2023 white paper, are:

Safety, security and robustness
Appropriate transparency and explainability
Fairness
Accountability and governance
Contestability and redress

These principles are currently non-statutory. Regulators are expected to interpret and apply them within their existing frameworks, adapting them to the specific risks and contexts of their sectors. The ICO, for example, maps the principles to UK GDPR requirements for automated decision-making. The FCA applies them through its existing rules on algorithmic trading and consumer outcomes.
The government has indicated that the forthcoming AI Bill will place a duty on regulators to have regard to these principles, giving them a firmer legal basis without creating a rigid compliance regime.
Which regulators cover AI
The sector-based model means that multiple regulators may apply simultaneously to a single AI system. An AI-powered recruitment tool, for instance, falls under ICO oversight for data protection, scrutiny by the Equality and Human Rights Commission (EHRC) for discrimination, and potentially Ofcom regulation if it operates on an online platform.
Businesses developing or deploying AI should identify all the regulators whose remit covers their use case. The Digital Regulation Cooperation Forum (DRCF) coordinates between regulators to reduce duplication and provide joined-up guidance.
Key institutions
The AI Security Institute focuses primarily on frontier AI models — the most powerful systems developed by major AI laboratories. Most businesses deploying AI will not interact directly with the Institute unless they are developing or fine-tuning foundation models. The DRCF, by contrast, produces practical guidance relevant to any business using AI within a regulated sector.
What is coming next
The legislative landscape is evolving rapidly. Key developments to monitor include:
AI Bill: Expected to place the five principles on a statutory footing and give regulators clearer mandates
ICO AI code of practice: The ICO is developing a statutory code under the Data (Use and Access) Act 2025 covering AI and automated decision-making
Copyright and AI: The government is considering how to balance AI training needs with creators' rights, following its 2024 consultation
International alignment: The UK is participating in the Hiroshima AI Process and the Council of Europe AI Convention, which may influence domestic regulation
Businesses should not wait for legislation to act. The regulators already have enforcement powers that apply to AI systems, and they are actively using them. Building compliance into your AI processes now will reduce the cost of adapting when statutory requirements arrive.
Summary
The UK takes a principles-based, sector-specific approach to AI regulation. There is no single AI law. Instead, existing regulators — including the ICO, FCA, MHRA, CMA, Ofcom, and EHRC — apply five cross-cutting principles within their own domains. The AI Security Institute (formerly the AI Safety Institute) provides guidance on frontier models. A comprehensive government AI Bill is expected in the second half of 2026.