Guide
AI compliance for financial services firms
How FCA-regulated firms must govern AI through Consumer Duty, SM&CR, and operational resilience frameworks. Covers model risk management, transparency obligations, and practical steps for compliant AI deployment in financial services.
Use existing FCA rules to manage AI risks in your financial services business. Assign a senior manager to oversee AI decisions. Ensure AI delivers good outcomes for customers and follows data protection laws.
- Apply Consumer Duty to ensure AI benefits customers
- Name a senior manager responsible for AI governance
- Check AI systems meet operational resilience rules
- Follow UK GDPR for AI processing personal data
- Complete a Data Protection Impact Assessment for high-risk AI
- Explain AI decisions to customers clearly
- Keep records of AI model validation and performance
- Prepare for FCA AI Update in September 2025
- Check if AI providers are Critical Third Parties
- Maximum UK GDPR fine for breaches: £17.5m or 4% of global annual turnover, whichever is higher
The Financial Conduct Authority (FCA) does not have AI-specific rules. Instead, it regulates AI through its existing supervisory frameworks: Consumer Duty, the Senior Managers and Certification Regime (SM&CR), and operational resilience requirements. For FCA-regulated firms, this means AI governance is not a separate compliance workstream but an extension of obligations you already have.
This approach has important practical consequences. There is no single FCA rulebook chapter to consult. Instead, you must map AI usage across multiple regulatory frameworks and ensure that each deployment meets the standards expected under whichever framework applies. The FCA has made clear through speeches, Dear CEO letters, and its AI Update (published April 2024) that it expects firms to manage AI risk proactively, and that existing rules are sufficient to hold firms accountable for AI failures.
Why financial services AI is different
Financial services firms face a more demanding regulatory environment for AI than most other sectors. Several factors make this the case:
- Consumer Duty (July 2023): Requires firms to deliver good outcomes for retail customers. If an AI model produces poor outcomes, even unintentionally, the firm is in breach
- SM&CR accountability: A named senior manager must be accountable for AI governance. There is no hiding behind the technology
- Prudential requirements: AI models used for credit decisions, capital calculations, or risk management engage PRA expectations on model risk
- FOS complaints: Customers harmed by AI decisions can complain to the Financial Ombudsman Service, which can award compensation
The FCA's position, articulated by its Chief Data, Information and Intelligence Officer, is that firms should be innovating with AI but must do so within robust governance frameworks. The regulator is not anti-AI; it is anti-ungoverned AI.
FCA AI governance frameworks
The FCA expects firms to govern AI through three interconnected frameworks. Each imposes specific requirements on how AI is developed, deployed, monitored, and retired.
Data protection for AI in financial services
Financial services firms process large volumes of personal data through AI systems, including credit histories, transaction data, identity documents, and behavioural patterns. UK GDPR applies to all of this processing, and the ICO has specific expectations for AI in financial services, particularly around automated credit decisions and profiling.
Transparency and explainability
Transparency is a recurring theme across FCA, ICO, and PRA expectations. For financial services firms, the transparency obligation operates at multiple levels: explaining to customers how AI affects them, explaining to regulators how AI models work, and explaining to internal governance bodies how models are performing.
The Mills Review and future regulation
In November 2024, the government commissioned Dame Elizabeth Mills to conduct an independent review of AI regulation in financial services. The Mills Review is expected to report in 2026 and may recommend sector-specific AI rules that go beyond the current framework-based approach.
While the review is ongoing, the FCA has signalled that it will not wait for new legislation before taking enforcement action on AI. Firms should not treat the Mills Review as a reason to delay AI governance. The current frameworks (Consumer Duty, SM&CR, and operational resilience) already provide the FCA with sufficient powers to supervise AI use.
Preparing for the Mills Review
Firms that have robust AI governance in place now will be better positioned to adapt to whatever the Mills Review recommends. The review is likely to focus on:
- Model risk management standards for AI
- Explainability requirements for customer-facing AI
- Third-party AI model oversight (including foundation models)
- Cross-regulator coordination between FCA, PRA, and ICO
Practical steps for FCA-regulated firms
These actions address specific FCA expectations. They are ordered by priority, starting with governance accountability and moving through to ongoing monitoring.
1. Identify the accountable senior manager
Under SM&CR, designate a senior manager with explicit responsibility for AI governance. This should be documented in their Statement of Responsibilities. The FCA expects a named individual, not a committee, to be accountable for AI outcomes. For most firms, this will be the Chief Risk Officer, Chief Technology Officer, or Chief Operating Officer.
2. Map all AI models in use
Create and maintain a comprehensive inventory of every AI and algorithmic model used across the firm. For each model, document its purpose, the data it processes, the decisions it influences, the vendor (if third-party), and the date of last validation. Include models used by outsourced service providers where they affect customer outcomes.
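An inventory is easier to keep current and audit when every entry follows a fixed schema. The sketch below shows one way to represent an entry in Python; the `ModelRecord` class, its field names, and the 365-day validation cycle are illustrative assumptions, not an FCA-prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in the firm's AI model inventory (fields are illustrative)."""
    model_id: str                   # internal identifier
    purpose: str                    # e.g. "retail credit scoring"
    data_processed: list[str]       # categories of data the model consumes
    decisions_influenced: str       # what the model's output feeds into
    vendor: str | None              # None for models built in-house
    last_validated: date            # date of last independent validation
    affects_retail_customers: bool  # flags the model for Consumer Duty review

def overdue_for_validation(record: ModelRecord, today: date,
                           max_age_days: int = 365) -> bool:
    """Flag models whose last validation is older than the firm's review cycle."""
    return (today - record.last_validated).days > max_age_days
```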
3. Validate models against Consumer Duty outcomes
For each AI model that affects retail customers, assess whether it delivers good outcomes across the four Consumer Duty outcome areas: products and services, price and value, consumer understanding, and consumer support. Document your assessment and review it at least annually. If a model produces poor outcomes for any identifiable group, take corrective action promptly.
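These assessments are easier to evidence at review time if they are captured in a structured, dated record. The sketch below assumes a simple good/poor rating per outcome area; the `assess_model` helper, the rating scale, and the annual review interval are illustrative choices, not an FCA-prescribed scheme.

```python
from datetime import date, timedelta

# The four Consumer Duty outcome areas named above.
OUTCOME_AREAS = (
    "products_and_services",
    "price_and_value",
    "consumer_understanding",
    "consumer_support",
)

def assess_model(model_id: str, ratings: dict[str, str], assessed_on: date) -> dict:
    """Record an annual Consumer Duty assessment for one model.

    `ratings` maps each outcome area to "good" or "poor"; any "poor"
    rating sets the corrective-action flag.
    """
    missing = [area for area in OUTCOME_AREAS if area not in ratings]
    if missing:
        raise ValueError(f"Assessment incomplete for {model_id}: missing {missing}")
    return {
        "model_id": model_id,
        "assessed_on": assessed_on.isoformat(),
        "ratings": ratings,
        "corrective_action_required": any(r == "poor" for r in ratings.values()),
        "next_review_due": (assessed_on + timedelta(days=365)).isoformat(),
    }
```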
4. Implement model risk management
Establish a model risk management framework covering development, testing, deployment, monitoring, and retirement of AI models. Include independent model validation, performance monitoring thresholds, and escalation procedures when models drift or underperform. The PRA's SS1/23 on model risk management provides a useful reference, even for solo-regulated FCA firms.
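Monitoring thresholds are easiest to enforce when they are computed rather than judged ad hoc. The sketch below uses the population stability index (PSI), a common drift metric in credit modelling, with the widely used rule-of-thumb cut-offs of 0.10 and 0.25; both the metric and the thresholds are illustrative choices, not requirements of SS1/23 or the FCA, and each firm should set its own in its framework.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between validation-time and live score distributions.

    `expected` and `actual` are bin proportions (each summing to 1.0)
    over the same score bins.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # guard against empty bins
        psi += (a - e) * math.log(a / e)
    return psi

def escalation_level(psi: float) -> str:
    """Map PSI to an escalation step using rule-of-thumb thresholds."""
    if psi < 0.10:
        return "no action"
    if psi < 0.25:
        return "investigate: notify model owner"
    return "escalate: refer to model risk committee, consider suspending the model"
```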
5. Monitor consumer outcomes continuously
Set up ongoing monitoring of AI-driven consumer outcomes, disaggregated by relevant characteristics where possible (age, vulnerability status, product type). The FCA expects firms to detect and address emerging harm proactively, not wait for complaints. Dashboard monitoring with automated alerts is good practice.
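A minimal version of such an alert compares each cohort's adverse-outcome rate against the portfolio-wide rate. In the sketch below, the record format, the cohort labels, and the five-percentage-point alert threshold are all illustrative assumptions, not regulatory parameters.

```python
from collections import defaultdict

# Illustrative threshold: flag any cohort whose adverse-outcome rate
# exceeds the portfolio-wide rate by more than 5 percentage points.
COHORT_GAP_ALERT = 0.05

def cohort_alerts(outcomes: list[dict]) -> list[str]:
    """Each record looks like {"cohort": "vulnerable", "adverse": True}.
    Returns alert messages for cohorts drifting above the overall rate."""
    if not outcomes:
        return []
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # cohort -> [adverse, count]
    for rec in outcomes:
        totals[rec["cohort"]][0] += int(rec["adverse"])
        totals[rec["cohort"]][1] += 1
    overall = sum(v[0] for v in totals.values()) / sum(v[1] for v in totals.values())
    alerts = []
    for cohort, (adverse, count) in totals.items():
        rate = adverse / count
        if rate - overall > COHORT_GAP_ALERT:
            alerts.append(f"{cohort}: adverse rate {rate:.1%} vs overall {overall:.1%}")
    return alerts
```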
6. Document governance for regulatory inspection
Maintain documentation sufficient to explain to FCA supervisors how each AI model works, how it was validated, who is accountable, and what monitoring is in place. The FCA's approach is to test governance through supervisory visits and data requests. Firms that cannot explain their AI governance will face heightened scrutiny.
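One practical way to keep this documentation inspection-ready is to render a one-page summary per model directly from the inventory. The template below is a hypothetical format with example values; the FCA does not prescribe specific headings.

```python
# Hypothetical one-page summary format; headings are illustrative.
MODEL_CARD_TEMPLATE = """\
Model: {model_id}
Purpose: {purpose}
Accountable senior manager: {accountable_smf}
Last independent validation: {last_validated}
Monitoring in place: {monitoring}
Known limitations: {limitations}
"""

def render_model_card(record: dict) -> str:
    """Render a summary a supervisor can read without access to the codebase."""
    return MODEL_CARD_TEMPLATE.format(**record)

print(render_model_card({
    "model_id": "credit-score-v4",          # example values only
    "purpose": "retail credit scoring",
    "accountable_smf": "Chief Risk Officer",
    "last_validated": "2025-01-15",
    "monitoring": "monthly PSI check; cohort outcome dashboard",
    "limitations": "not validated for applicants with thin credit files",
}))
```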