Guide
AI product safety: how product liability law applies to AI
How existing UK product safety law applies to AI products and AI components embedded in physical goods. Explains the Consumer Protection Act 1987 strict liability framework, how the Product Regulation and Metrology Act 2025 extends safety duties to AI as an intangible component, enforcement bodies, and how UK rules compare with the EU's updated product liability regime.
There is a common misconception that AI is unregulated in the UK. In reality, businesses that manufacture, import, or supply products containing AI are already subject to UK product safety law — specifically the Consumer Protection Act 1987 (CPA) and the General Product Safety Regulations 2005 (GPSR). The Product Regulation and Metrology Act 2025 (PRMA) extends this framework further, classifying AI algorithms and software as intangible components of products and enabling future regulations on AI product safety.
Understanding how these rules apply to AI is not just an abstract legal question. If a person is injured or suffers loss because an AI system behaved unsafely — whether that system was in a consumer appliance, a piece of industrial equipment, a medical device, or a vehicle — your business may face civil compensation claims, criminal prosecution, and regulatory enforcement simultaneously. The question is not whether product safety law covers your AI product, but whether your compliance programme reflects that it does.
What counts as an AI product for these purposes
UK product safety law does not define "AI product" as a distinct category. Instead, it covers products — physical goods placed on the market — and the question is whether AI functionality forms part of that product.
Product safety obligations apply to AI in the following scenarios:
- AI embedded in physical goods: A smart appliance with machine learning features, a robot that adapts its behaviour, autonomous industrial equipment, or a medical device with diagnostic AI — the AI is part of the product and the product must be safe overall
- AI as a software update: If an AI system is deployed as a software update to an existing product, the updated product must remain compliant. The PRMA 2025 enables regulations to address lifecycle safety obligations including post-sale software changes
- AI components supplied to assemblers: If you supply an AI module that another manufacturer incorporates into a finished product, you may have responsibilities as a component supplier, including providing safety information to the assembler
The framework is less clear for standalone AI software without a physical product host — this is an area where the PRMA 2025's enabling powers will be important as secondary legislation develops. For now, standalone software is not treated as a "product" for CPA 1987 purposes, though the GPSR and sector-specific regulations (for example, for software as a medical device) may still apply.
How the Consumer Protection Act 1987 applies to AI products
Why strict liability matters for AI
The strict liability framework of CPA 1987 Part I creates particular challenges for AI products. Traditional product defect claims ask whether the product was safe at the point it was supplied, a relatively static assessment. AI-powered products introduce dynamic behaviour that can change over time as the system learns, adapts, or is updated.
The defect test under CPA 1987 s.3 asks whether the safety of the product is "not such as persons generally are entitled to expect". For an AI product, the relevant expectations are shaped by:
- How the product is marketed and what its instructions say about safe use
- What safety behaviours a reasonable user would assume the AI system exhibits
- Whether the AI's behaviour was consistent with reasonably foreseeable use, even if the user's behaviour was not exactly as intended
- The state of the art at the time the product was supplied (relevant to the development risks defence)
The fact that an AI system behaved as it was programmed, or that its behaviour was an emergent property of its training, does not provide a defence. If that behaviour caused damage, the producer may be liable.
The development risks defence and AI
The development risks defence under CPA 1987 s.4(1)(e) allows producers to escape liability where the state of scientific and technical knowledge at the time of supply was not such that a producer of products of the same description might be expected to have discovered the defect. For AI products, this defence is difficult to rely on. Courts are likely to examine whether the developer had access to testing frameworks, safety evaluation methods, or red-team assessments that could have identified problematic behaviour. Businesses that document their safety testing thoroughly are better positioned to argue the defence if it arises.
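One practical way to make that documentation usable in a dispute is to record every safety evaluation as a structured, timestamped entry in an append-only log, so you can show what was tested, when, and with which methods at the time of supply. The Python sketch below is a minimal illustration only; the record fields, the log_safety_test helper, and the JSON-lines format are assumptions for illustration, not a statutory or standardised schema.

```python
# Minimal sketch of a structured safety-test evidence log. The field names,
# the log_safety_test helper, and the JSON-lines format are illustrative
# assumptions, not a statutory or standardised schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SafetyTestRecord:
    product: str     # product or model identifier
    version: str     # software/model version under test
    test_type: str   # e.g. "red-team", "regression", "stress"
    method: str      # testing framework or evaluation method used
    outcome: str     # "pass", "fail", or "mitigated"
    notes: str       # findings and any mitigations applied
    timestamp: str = ""

def log_safety_test(record: SafetyTestRecord, path: str = "safety_evidence.jsonl") -> None:
    """Append one timestamped record to an append-only evidence log."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: documenting a red-team assessment before supply.
log_safety_test(SafetyTestRecord(
    product="smart-thermostat",
    version="2.4.1",
    test_type="red-team",
    method="adversarial input suite v3",
    outcome="mitigated",
    notes="Unsafe setpoint override found; output range clamped in firmware.",
))
```

An append-only, timestamped format matters here because the defence turns on what was knowable at the time of supply, not what later testing revealed.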
The Product Regulation and Metrology Act 2025 and AI
The PRMA 2025's classification of AI as an intangible product component matters for practical compliance planning. It signals the government's intent to regulate AI product safety through secondary legislation and establishes the legal architecture for doing so. Businesses developing AI-enabled products should:
- Monitor GOV.UK for consultations on secondary legislation under the PRMA, particularly on AI product safety, online marketplace duties, and software lifecycle obligations
- Build documentation and testing practices now that will support compliance when secondary legislation arrives — changing engineering processes later is far more costly than building them in from the start (a minimal release-gate sketch follows this list)
- Understand that the PRMA also gives powers to require a UK Responsible Person for imported AI products — a role that will carry statutory compliance obligations when regulations are made
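As one concrete example of building those practices in early, a release process can refuse to ship a software or model update until the evidence log records acceptable results for an agreed set of safety checks against that version. This sketch assumes the JSON-lines evidence log from the earlier example; the required test categories and the release_gate helper are illustrative assumptions, not requirements drawn from the PRMA.

```python
# Illustrative release gate: block an AI software update unless the evidence
# log contains passing (or mitigated) results for each required test type.
# The required categories and the log format are assumptions, not legal rules.
import json

REQUIRED_TESTS = {"red-team", "regression", "stress"}

def release_gate(version: str, path: str = "safety_evidence.jsonl") -> bool:
    """Return True only if every required test type has acceptable evidence."""
    covered = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec["version"] == version and rec["outcome"] in {"pass", "mitigated"}:
                covered.add(rec["test_type"])
    missing = REQUIRED_TESTS - covered
    if missing:
        print(f"Release of {version} blocked: no evidence for {sorted(missing)}")
        return False
    return True
```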
Enforcement: who regulates AI product safety
The primary enforcement bodies for consumer product safety in Great Britain are the Office for Product Safety and Standards (OPSS) and local authority Trading Standards services. For AI products used in workplaces, the Health and Safety Executive (HSE) has jurisdiction over risks to workers and the public.
OPSS has powers to issue instructions to market surveillance authorities, coordinate cross-border issues, and take direct enforcement action against systemic or high-risk products. Trading Standards handles local enforcement, including test purchases, inspections, and criminal prosecutions.
For AI products specifically, multiple regulators may have concurrent jurisdiction:
- MHRA: For AI classified as a medical device or in-vitro diagnostic device under the Medical Devices Regulations 2002
- CAA: For AI in aviation products
- DVSA/VCA: For AI in road vehicles
- HSE: For AI in machinery and workplace equipment under the Supply of Machinery (Safety) Regulations 2008
- ICO: Where the AI product processes personal data — UK GDPR obligations apply alongside product safety duties
When an AI product fails, you may face enforcement action from more than one regulator. Building compliance programmes that address all relevant frameworks simultaneously is more efficient than treating each regulator separately.
How UK rules compare with the EU
If your business supplies AI products to the EU market, you need to understand two significant developments in EU law that go further than current UK rules.
EU Product Liability Directive (2024/2853)
The new EU Product Liability Directive, which entered into force in December 2024 with a transposition deadline for EU member states of 9 December 2026, explicitly covers AI systems and software as products. Key changes relevant to AI businesses include:
- Software (including AI systems) is explicitly classified as a product, closing the gap that existed under the 1985 Directive (85/374/EEC)
- A rebuttable presumption of defect, including where a defendant fails to comply with an order to disclose relevant evidence, which is significant given the opacity of many AI systems
- Disclosure obligations: a court can order the defendant to disclose relevant evidence where the claimant presents a plausible claim, and presumptions of defect or causation can apply where proving them would be excessively difficult
- Extended longstop period: the standard 10-year expiry period extends to 25 years where personal injuries are latent and symptoms are slow to emerge
EU AI Act risk categories
For AI systems classified as high-risk under the EU AI Act (which applies from August 2026 for most categories), additional conformity assessment, documentation, and registration obligations apply that go beyond product safety requirements. These cover AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice, and democratic processes.
UK businesses selling AI products into the EU will need to comply with both the EU Product Liability Directive and the EU AI Act, as well as continuing to meet UK product safety requirements for their domestic market. The two regimes overlap but are not identical — early legal advice on dual compliance is advisable for significant AI product launches.
How this connects to your broader compliance picture
AI product safety sits at the intersection of product safety law, sector-specific regulation, data protection, and emerging AI-specific frameworks. For businesses developing or selling AI-enabled products, the practical implications are:
- Safety by design: Build safety assessment into the development process, not as an afterthought. Document your risk assessment, testing methodology, and the measures taken to address identified risks
- Supply chain due diligence: If you incorporate AI components from third parties, understand their safety properties and what claims you can make about the finished product
- Post-market surveillance: Monitor your AI product in use. Where the AI can learn or change behaviour, establish systems to detect and respond to safety incidents after sale (a minimal monitoring sketch follows this list)
- Recall readiness: Have a recall plan before you need one. AI product recalls may require software updates rather than physical retrieval, but the legal framework for recall remains the same
- Insurance: Review whether your product liability insurance covers AI-specific scenarios, including AI system failures that cause personal injury or property damage
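To make the post-market surveillance point concrete, the sketch below flags product versions whose field incident rate crosses a review threshold. The threshold value, the shape of the incident feed, and the escalation step are all assumptions for illustration; actual surveillance obligations depend on the applicable sector rules.

```python
# Minimal post-market surveillance sketch: flag a deployed AI product version
# for safety review when field incident reports exceed a threshold rate.
# The threshold, incident feed shape, and escalation step are illustrative
# assumptions, not regulatory requirements.
from collections import Counter

INCIDENTS_PER_10K_UNITS_THRESHOLD = 5.0

def check_incident_rate(incident_reports: list[dict], units_in_field: dict[str, int]) -> list[str]:
    """Return product versions whose safety incident rate breaches the threshold."""
    counts = Counter(r["version"] for r in incident_reports if r["safety_related"])
    flagged = []
    for version, n in counts.items():
        units = units_in_field.get(version, 0)
        if units and (n / units) * 10_000 > INCIDENTS_PER_10K_UNITS_THRESHOLD:
            flagged.append(version)
    return flagged

# Example feed: two safety-related reports against 1,000 fielded units.
reports = [
    {"version": "2.4.1", "safety_related": True},
    {"version": "2.4.1", "safety_related": True},
    {"version": "2.4.0", "safety_related": False},
]
for v in check_incident_rate(reports, {"2.4.1": 1000, "2.4.0": 5000}):
    print(f"Version {v}: incident rate above threshold, escalate to safety review")
```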