Guide
AI transparency and explainability obligations
What transparency and explainability mean for AI systems and how to meet the obligations. Covers UK GDPR requirements for automated decision-making, ICO expectations, and practical approaches to making AI decisions understandable to the people they affect.
When your business uses AI to make or support decisions about people, those people have a right to understand how and why those decisions were reached. This is not just good practice — it is a legal requirement under UK GDPR and the Data (Use and Access) Act 2025.
Transparency and explainability are related but distinct concepts. Transparency means being open about the fact that you use AI and what it does. Explainability means being able to describe how a specific AI decision was reached in terms the affected person can understand.
Many businesses struggle with explainability because AI systems — particularly deep learning models — can be difficult to interpret even for the people who built them. But regulators do not expect you to provide a mathematical proof of every decision. They expect you to provide a meaningful, accessible explanation that is proportionate to the impact of the decision on the individual.
What the law requires
UK GDPR creates specific transparency obligations for automated decision-making and profiling. Under Articles 13 and 14, when you collect personal data you must tell people about:
- The existence of any automated decision-making, including profiling
- Meaningful information about the logic involved
- The significance and envisaged consequences for the individual
Under Article 22, where decisions are based solely on automated processing and produce legal or similarly significant effects, individuals have the right to:
- Not be subject to the decision (with certain exceptions)
- Obtain human intervention
- Express their point of view
- Contest the decision
The Data (Use and Access) Act 2025 reformed these provisions. The new Article 22A introduces a broader right to meaningful information about automated decisions and strengthens the right to human review. Businesses must now provide explanations that are genuinely useful to the individual, not just technically accurate.
ICO requirements for AI transparency
The ICO has published detailed guidance on what it expects from organisations using AI. The ICO's approach goes beyond the minimum legal requirements and sets out best practice that it will use when assessing compliance during audits and investigations.
The ICO expects organisations to be transparent about AI at three levels:
- Organisational level: Publish a clear statement about how your organisation uses AI, what types of decisions it supports, and how you govern it. This can be part of your privacy notice or a standalone AI transparency statement.
- System level: For each AI system, document its purpose, the data it uses, how it was trained, how it makes decisions, and how you test for accuracy and bias. Make this available to regulators on request.
- Individual level: When an AI decision affects a specific person, provide them with a meaningful explanation of how that decision was reached and what they can do if they disagree.
Practical approaches to explainability
Explainability is not one-size-fits-all. The right approach depends on the type of AI you use, the impact of the decision, and the audience for the explanation. Here are practical strategies that work for different situations.
For rule-based and decision-tree AI
If your AI follows explicit rules or a decision tree, explainability is straightforward. You can trace the path from inputs to output and present the key factors that determined the outcome. For example: "Your application was declined because your annual turnover is below the minimum threshold of 50,000 pounds and your trading history is less than 12 months."
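The tracing described above can be sketched in code: each rule that fails records its own reason, so the final explanation is simply the list of rules the applicant did not meet. This is a minimal illustration; the thresholds and field names are invented for the example, not drawn from any real lending policy.

```python
# Minimal sketch of a rule-based decision with a traceable explanation.
# MIN_TURNOVER and MIN_TRADING_MONTHS are illustrative assumptions.

MIN_TURNOVER = 50_000      # pounds
MIN_TRADING_MONTHS = 12

def assess_application(annual_turnover: int, trading_months: int) -> dict:
    """Apply explicit rules and record a plain-language reason per failure."""
    reasons = []
    if annual_turnover < MIN_TURNOVER:
        reasons.append(
            f"your annual turnover ({annual_turnover:,} pounds) is below "
            f"the minimum threshold of {MIN_TURNOVER:,} pounds"
        )
    if trading_months < MIN_TRADING_MONTHS:
        reasons.append(
            f"your trading history ({trading_months} months) is less than "
            f"{MIN_TRADING_MONTHS} months"
        )
    return {"approved": not reasons, "reasons": reasons}

result = assess_application(annual_turnover=40_000, trading_months=8)
print(result["approved"])          # False
for reason in result["reasons"]:
    print("Your application was declined because", reason)
```

Because every rule writes its own reason, the explanation given to the individual is guaranteed to match the logic that actually produced the decision.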
For machine learning models
Machine learning models are harder to explain because they learn patterns from data rather than following explicit rules. Useful techniques include:
- Feature importance: Identify which input variables had the most influence on the decision. For example: "The three most important factors in your credit score were payment history (40%), outstanding debt (30%), and length of credit history (20%)."
- Counterfactual explanations: Explain what would need to change for a different outcome. For example: "Your application would have been approved if your annual revenue exceeded 100,000 pounds."
- Example-based explanations: Show similar cases and their outcomes to help the person understand the decision in context.
For large language models and generative AI
Generative AI presents particular challenges for explainability because these models produce novel outputs rather than selecting from predefined options. Focus on:
- Being clear that the output was generated by AI, not written by a human
- Explaining the purpose and limitations of the AI system
- Providing a route for human review if the output is used to make decisions about people
- Documenting the prompts, training data, and guardrails that shape the AI's behaviour
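The documentation point above can be made concrete with a per-output transparency record: each generated output is stored alongside the prompt, the guardrails that shaped it, an AI-generated notice, and a route to human review. The field names, model name, and contact address below are illustrative assumptions, not a prescribed schema.

```python
# Sketch of a per-output transparency record for a generative AI system.
# All field names and example values are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GenerativeOutputRecord:
    """Records what shaped an AI-generated output and how to contest it."""
    model_name: str
    purpose: str
    prompt: str
    output: str
    guardrails: list[str] = field(default_factory=list)
    ai_generated_notice: str = "This content was generated by AI."
    human_review_contact: str = "reviews@example.com"   # illustrative
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = GenerativeOutputRecord(
    model_name="internal-llm-v1",                       # illustrative
    purpose="Draft responses to customer complaints",
    prompt="Summarise the complaint and propose a remedy.",
    output="We are sorry to hear about the delay with your order...",
    guardrails=["no financial advice", "human approval before sending"],
)
print(record.ai_generated_notice)
print("To request human review, contact:", record.human_review_contact)
```

Keeping these records per output means the system-level documentation the ICO expects can be evidenced for any individual decision on request.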
Tailoring explanations to the audience
A customer, an employee, a regulator, and a data scientist each need different levels of detail. Write your explanations in plain language for the people affected by the decision. Keep technical details for internal documentation and regulatory submissions.
The ICO recommends using layered explanations: a short, simple explanation upfront, with the option to access more detail if the person wants it. This mirrors the approach recommended for privacy notices.
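The layered approach above can be sketched as a simple structure: a short summary shown upfront, with further layers returned only when the person asks for them. The layer names and wording are illustrative assumptions.

```python
# Sketch of a layered explanation: a short summary upfront, with the
# option to access more detail. Layer names and text are illustrative.

LAYERS = {
    "summary": "Your application was declined by an automated check of "
               "turnover and trading history.",
    "detail": "We use an automated system to check applications against "
              "minimum criteria: annual turnover of at least 50,000 pounds "
              "and 12 months of trading history. Your application did not "
              "meet the turnover criterion.",
    "recourse": "You can ask for a human review of this decision by "
                "contacting our support team.",
}

def explain(depth: str = "summary") -> str:
    """Return the explanation layers up to and including the requested one."""
    order = ["summary", "detail", "recourse"]
    chosen = order[: order.index(depth) + 1]
    return "\n\n".join(LAYERS[layer] for layer in chosen)

print(explain("summary"))     # the short upfront explanation only
print(explain("recourse"))    # all three layers, most detail last
```

The same structure works for privacy notices: the reader always gets the summary first, and deeper layers are additive rather than alternative versions.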