Which regulator covers your AI system
Decision-tree reference guide mapping AI use cases to the UK regulators responsible for oversight. Covers the ICO, FCA, Ofcom, CMA, MHRA, HSE, and EHRC, with scenario-based guidance on which regulators apply to your AI system and the penalties each can impose.
The UK does not have a single AI regulator. Instead, your AI system may fall under the remit of several regulators at the same time, depending on what it does, whose data it processes, and which sector it operates in.
This guide helps you identify which regulators apply to your AI use case and understand the enforcement powers each holds. If you are developing, deploying, or procuring an AI system, you should work through each scenario below to build a complete picture of your regulatory obligations.
AI use case to regulator mapping
Quick decision guide
Use these scenarios to identify which regulators are most likely to apply to your AI system. Most systems will trigger more than one; a short code sketch of the full mapping follows the scenarios.
Your AI system processes personal data
Regulator: ICO. If your AI system uses personal data for any purpose — training, inference, profiling, or automated decision-making — the ICO oversees your compliance with UK GDPR and the Data Protection Act 2018. This applies regardless of sector. You must have a lawful basis for processing, conduct a Data Protection Impact Assessment for high-risk processing, and provide transparency about how the system uses personal data.
Your AI system operates in financial services
Regulator: FCA (and PRA for prudential matters). AI used in credit decisions, algorithmic trading, insurance pricing, fraud detection, or customer communications falls under FCA oversight. The FCA expects firms to explain AI-driven decisions to customers and to demonstrate that algorithms do not produce unfair outcomes. The Senior Managers and Certification Regime means named individuals are accountable for AI governance.
Your AI system is a medical device or assists clinical decisions
Regulator: MHRA. AI software that diagnoses, monitors, or recommends treatment may be classified as a medical device under the Medical Devices Regulations 2002. This requires UKCA marking, conformity assessment, and post-market surveillance. The classification depends on the intended purpose and the level of clinical risk.
Your AI system operates on an online platform
Regulator: Ofcom. Under the Online Safety Act 2023, platforms using AI for content recommendation, moderation, or age assurance must comply with Ofcom's codes of practice. This includes transparency about how algorithmic systems curate content and what safeguards protect children.
Your AI system affects workplace safety
Regulator: HSE. AI controlling industrial machinery, autonomous vehicles in warehouses, or robotic systems in manufacturing falls under health and safety legislation. The employer's general duty under the Health and Safety at Work etc. Act 1974 applies to AI-related risks. You must risk-assess AI systems that interact with workers or the public.
Your AI system makes recruitment or employment decisions
Regulator: EHRC. AI used in CV screening, interview scoring, performance assessment, or redundancy selection must comply with the Equality Act 2010. The EHRC can investigate and take enforcement action where AI systems produce discriminatory outcomes, whether or not discrimination was intended. Indirect discrimination through biased training data is a particular risk.
Your AI system affects competition or consumer choice
Regulator: CMA. AI used in pricing algorithms, personalised offers, or market analysis may raise competition concerns. The CMA has published guidance on algorithmic collusion and is actively monitoring AI-driven pricing. The Digital Markets, Competition and Consumers Act 2024 gives the CMA additional powers over digital markets.
Penalties for non-compliance
Each regulator carries its own enforcement powers, and these can apply cumulatively to the same system:
- ICO: fines of up to £17.5 million or 4% of annual worldwide turnover, whichever is higher, plus enforcement notices and orders to stop processing
- FCA: unlimited fines, restrictions on regulated business, and personal accountability for named senior managers
- Ofcom: fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, and service-restriction orders in serious cases
- CMA: fines of up to 10% of worldwide turnover for competition infringements, with additional direct consumer-law enforcement powers under the Digital Markets, Competition and Consumers Act 2024
- MHRA: criminal prosecution, product recalls, and suspension or withdrawal of devices from the market
- HSE: improvement and prohibition notices, unlimited fines on conviction, and prosecution of individuals as well as organisations
- EHRC: formal investigations, unlawful act notices, and binding agreements; individual discrimination claims in tribunals carry uncapped compensation
When multiple regulators apply
For most AI systems of any complexity, two or more regulators will have concurrent jurisdiction. The Digital Regulation Cooperation Forum (DRCF) — comprising the ICO, Ofcom, CMA, and FCA — coordinates to avoid conflicting requirements and reduce duplication.
In practice, this means:
- No single point of contact: You may need to engage with each regulator separately
- Different compliance standards: Each regulator applies its own framework, not a unified AI standard
- Cumulative penalties: Enforcement action by one regulator does not prevent action by another for the same AI system
- Regulatory sandboxes: The FCA and ICO both offer sandbox or advisory services for novel AI applications
Practical step: Create a regulatory map for each AI system you develop or deploy. List every regulator whose remit touches your use case, the specific obligations that apply, and the named individual within your organisation who is accountable for compliance with each. Review this map whenever the system's functionality or data inputs change.
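A regulatory map like this can live in version control alongside the system it describes. Here is a minimal sketch of one possible structure, with illustrative field names; the obligations and owners shown are placeholders, not recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class RegulatorEntry:
    regulator: str            # e.g. "ICO"
    obligations: list[str]    # the specific obligations that apply
    accountable_owner: str    # named individual responsible for compliance

@dataclass
class RegulatoryMap:
    system_name: str
    last_reviewed: str        # revisit whenever functionality or data inputs change
    entries: list[RegulatorEntry] = field(default_factory=list)

# Example map for a hypothetical credit-decisioning model
credit_model_map = RegulatoryMap(
    system_name="credit-decisioning-model",
    last_reviewed="2025-01-15",
    entries=[
        RegulatorEntry(
            regulator="ICO",
            obligations=["lawful basis for processing", "DPIA for high-risk processing"],
            accountable_owner="Head of Data Protection",
        ),
        RegulatorEntry(
            regulator="FCA",
            obligations=["explainable credit decisions", "SM&CR accountability"],
            accountable_owner="Chief Risk Officer",
        ),
    ],
)
```

Keeping the map as structured data rather than a standalone document makes the review step enforceable: a change to the system's inputs can be gated on an updated `last_reviewed` date and a sign-off from each accountable owner.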