Guide
Assess your AI compliance obligations
Step-by-step guide to assessing what AI compliance obligations apply to your business. Covers inventorying AI systems, identifying personal data processing, mapping to regulators, conducting DPIAs, checking equality impacts, and documenting governance arrangements.
Check if your AI systems follow UK rules. List your AI tools, see if they use personal data, find which regulators apply, check for discrimination risks, and keep records. Different regulators cover different AI uses like data protection, equality, or safety.
- List all AI systems your business uses
- Check if AI processes personal data (UK GDPR applies)
- Find which regulators oversee your AI (ICO, EHRC, HSE etc)
- Do a Data Protection Impact Assessment (DPIA) for high-risk AI
- Test AI for bias against protected characteristics
- Document who manages AI and how decisions are made
- ICO regulates AI using personal data
- EHRC covers AI equality impacts
- HSE oversees safety-critical AI
- Keep records of AI governance arrangements
If your business uses artificial intelligence — whether a chatbot handling customer enquiries, an algorithm screening job applicants, or a machine learning model assessing credit risk — you already have compliance obligations. There is no single AI Act in the UK. Instead, existing regulators apply their own rules to AI within their domains.
This means the obligations that apply to you depend on what your AI does, who it affects, and what data it processes. A recruitment AI triggers employment and equality law. A customer-facing AI processing personal data triggers data protection law. An AI controlling safety-critical equipment triggers health and safety law.
This guide walks you through a structured assessment so you can identify which obligations apply, which regulators oversee your use of AI, and what steps you need to take to comply.
The UK's approach to AI regulation
The UK government has adopted a pro-innovation, sector-specific approach to AI regulation. Rather than creating a single AI regulator or a comprehensive AI Act, the government has asked existing regulators to apply five cross-cutting principles to AI within their remits.
This means the ICO regulates AI that processes personal data, the Equality and Human Rights Commission (EHRC) oversees AI that affects equality, and the Health and Safety Executive (HSE) covers AI in safety-critical environments. Understanding which regulators have jurisdiction over your AI systems is the first step in assessing your obligations.
How to assess your AI compliance obligations
Work through these six steps to build a clear picture of what your business must do. Each step builds on the previous one, so complete them in order.
1. Inventory all AI systems in your business
List every AI tool, algorithm, or automated decision-making system your business uses. Include third-party AI services such as chatbots, recommendation engines, and fraud detection tools. For each system, record its purpose, the decisions it makes or supports, who is affected by those decisions, and whether it operates autonomously or with human oversight. Do not overlook AI embedded in existing software — many CRM, HR, and accounting platforms now include AI features.
2. Identify where AI processes personal data
For each AI system in your inventory, determine whether it collects, stores, analyses, or makes decisions based on personal data. Personal data includes names, email addresses, IP addresses, location data, biometric data, and any information that could identify a living person. If an AI system processes personal data, UK GDPR applies and the ICO is the relevant regulator. Pay particular attention to special category data such as health information, ethnic origin, or trade union membership, which triggers additional safeguards.
3. Map each AI system to the relevant regulators
Use your inventory to identify which regulators have oversight. The ICO covers any AI processing personal data. The EHRC covers AI that could discriminate on protected characteristics. The FCA covers AI used in financial services. The HSE covers AI in safety-critical applications. Ofcom covers AI in online content moderation. The CMA covers AI affecting competition and consumer protection. Many AI systems fall under multiple regulators. A recruitment AI, for example, engages both the ICO and the EHRC, and if the firm deploying it is FCA-regulated, the FCA may also take an interest in how the system is governed.
4. Conduct a DPIA for high-risk AI processing
If your AI system processes personal data and involves automated decision-making, profiling, large-scale processing of special categories, or systematic monitoring of public spaces, you must conduct a Data Protection Impact Assessment. The DPIA must assess the necessity and proportionality of the processing, identify risks to individuals, and set out measures to mitigate those risks. Consult the ICO if your DPIA identifies high residual risks that you cannot mitigate.
5. Check for equality and discrimination impacts
Assess whether your AI system could directly or indirectly discriminate against people with protected characteristics under the Equality Act 2010. This includes age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. Test your AI outputs for bias across these characteristics. Even if you did not intend to discriminate, you are liable if your AI produces discriminatory outcomes.
6. Document your governance arrangements
Record who in your organisation is responsible for each AI system, how decisions are reviewed and challenged, how you monitor for bias and errors, how individuals can contest AI decisions, and your process for updating or withdrawing AI systems that cause harm. This documentation demonstrates accountability to regulators and provides evidence of compliance if you face an investigation.
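To make step 1 concrete, the inventory can be kept as structured records rather than free text, so that later steps (such as identifying which systems process personal data) become simple queries. The sketch below is illustrative only: the field names and example systems are assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative fields only)."""
    name: str
    purpose: str
    decisions_supported: str
    people_affected: str
    autonomous: bool              # True if it acts without human review
    processes_personal_data: bool
    third_party_vendor: str = ""  # blank if built in-house

# Hypothetical entries for a small business's inventory
inventory = [
    AISystemRecord(
        name="Support chatbot",
        purpose="Answer customer enquiries",
        decisions_supported="Routes or resolves support tickets",
        people_affected="Customers",
        autonomous=True,
        processes_personal_data=True,
        third_party_vendor="(hypothetical vendor)",
    ),
    AISystemRecord(
        name="CV screening model",
        purpose="Shortlist job applicants",
        decisions_supported="Pass/reject at first sift",
        people_affected="Job applicants",
        autonomous=False,
        processes_personal_data=True,
    ),
]

# Step 2 then becomes a query: systems processing personal data
# fall under UK GDPR and the ICO's remit.
ico_scope = [s.name for s in inventory if s.processes_personal_data]
```

Keeping the inventory in this form also gives you a single artefact to show a regulator on request.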
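The regulator mapping in step 3 can be expressed as a simple rule-of-thumb function. This is a sketch only: real jurisdiction depends on the facts of each deployment, and the attribute names here are invented for illustration.

```python
def relevant_regulators(processes_personal_data: bool,
                        affects_protected_groups: bool,
                        financial_services: bool,
                        safety_critical: bool,
                        content_moderation: bool,
                        competition_impact: bool) -> list[str]:
    """Rough mapping from an AI system's attributes to likely UK regulators.

    Illustrative heuristic only; many systems engage several
    regulators at once, and jurisdiction turns on the facts.
    """
    regulators = []
    if processes_personal_data:
        regulators.append("ICO")
    if affects_protected_groups:
        regulators.append("EHRC")
    if financial_services:
        regulators.append("FCA")
    if safety_critical:
        regulators.append("HSE")
    if content_moderation:
        regulators.append("Ofcom")
    if competition_impact:
        regulators.append("CMA")
    return regulators

# A recruitment AI typically engages at least the ICO and EHRC
print(relevant_regulators(True, True, False, False, False, False))
# prints ['ICO', 'EHRC']
```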
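One way to start the bias testing in step 5 is to compare outcome rates across groups. The sketch below computes a disparity ratio on hypothetical shortlisting outcomes; the 0.8 "four-fifths" threshold mentioned in the comments is a common screening heuristic, not a test set out in the Equality Act 2010, and a low ratio signals the need for investigation rather than proving discrimination.

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (number selected, number considered)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def disparity_ratio(outcomes: dict) -> float:
    """Ratio of the lowest to the highest group selection rate.

    A ratio well below 1.0 flags possible indirect discrimination
    worth investigating; 0.8 (the 'four-fifths' rule) is a widely
    used screening heuristic, not a UK legal threshold.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical shortlisting outcomes from a recruitment AI
outcomes = {"group_a": (45, 100), "group_b": (27, 100)}
ratio = disparity_ratio(outcomes)          # 0.27 / 0.45 = 0.6
print(f"disparity ratio: {ratio:.2f}")     # prints "disparity ratio: 0.60"
```

Run a check like this on each protected characteristic for which you hold outcome data, and record the results as part of your governance documentation (step 6).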
ICO data protection requirements for AI
If your AI processes personal data, the ICO expects you to meet specific requirements beyond standard UK GDPR compliance. These address the particular risks that AI poses to individuals' rights and freedoms.
DPIA requirements for AI systems
A Data Protection Impact Assessment is mandatory for most AI systems that process personal data. The assessment must be conducted before the processing begins and reviewed whenever the processing changes significantly.
Equality and discrimination obligations
The Equality Act 2010 applies to AI in the same way it applies to human decision-making. If your AI produces outcomes that disproportionately disadvantage people with protected characteristics, you may be liable for indirect discrimination even if the algorithm was not designed to discriminate.
Enforcement risk from multiple regulators
Regulators expect you to be able to explain how your AI systems work and what safeguards you have in place. If you cannot demonstrate that you have assessed your obligations and taken reasonable steps to comply, you risk enforcement action from several regulators at once. The ICO can fine up to £17.5 million or 4% of annual worldwide turnover, whichever is higher. Employment tribunals can award unlimited compensation for discrimination.