
The EU AI Act entered into force in August 2024 and is being implemented in stages through to 2027. Even though the UK is no longer an EU member state, the Act has extraterritorial scope — it applies to any business that places an AI system on the EU market or whose AI system outputs are used within the EU.

If your business develops AI products, integrates AI into services, or deploys AI tools that affect EU customers, employees, or partners, you need to understand what the EU AI Act requires and when each obligation takes effect. Failure to comply can result in fines of up to 35 million EUR or 7% of global annual turnover.

This guide explains the risk classification framework, what is already in force, what is coming next, and how the EU regime sits alongside the UK's own approach to AI regulation.

Risk classification

The EU AI Act takes a risk-based approach. Every AI system falls into one of four tiers — unacceptable risk (prohibited outright), high risk, limited risk, or minimal risk — and the obligations you face depend on which tier applies. The higher the risk, the stricter the requirements.

In practice, most commercial AI applications fall into the minimal or limited risk tiers and face few additional obligations. The critical question for UK businesses is whether any of your AI systems qualify as high-risk — particularly those used in employment decisions, credit scoring, or access to essential services.

If you are unsure which tier applies, start by listing every AI system your business uses or provides. For each one, check whether it falls within a high-risk category. The EU AI Office has published guidance to help businesses classify their systems.
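The inventory-and-check exercise above can be sketched as a simple script. This is an illustrative aid only — the category set and system records below are hypothetical, abridged examples, not a complete legal mapping of the Act's Annex III, so always confirm classifications against the Act itself and the EU AI Office guidance.

```python
# Illustrative AI-system inventory check against high-risk use areas.
# HIGH_RISK_AREAS is an abridged, hypothetical subset of Annex III.
HIGH_RISK_AREAS = {
    "employment",          # recruitment, promotion, task allocation
    "credit_scoring",      # creditworthiness assessment
    "education",           # admission and assessment of students
    "essential_services",  # access to essential public or private services
}

def classify(system: dict) -> str:
    """Return a rough risk tier for one inventoried AI system."""
    if system.get("use_area") in HIGH_RISK_AREAS:
        return "high"
    if system.get("interacts_with_people"):
        return "limited"   # transparency duties apply, e.g. chatbots
    return "minimal"

inventory = [
    {"name": "CV screening tool", "use_area": "employment"},
    {"name": "Support chatbot", "interacts_with_people": True},
    {"name": "Warehouse demand forecaster"},
]

for s in inventory:
    print(f"{s['name']}: {classify(s)}")
```

A real inventory would also record who provides each system, where it is deployed, and whether its outputs reach EU-based individuals — the facts that determine whether the Act applies at all.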

Implementation timeline

The EU AI Act's obligations do not all apply at once. Different provisions take effect on different dates, and some are already in force:

  • 2 February 2025 — prohibitions on unacceptable-risk AI practices and AI literacy obligations
  • 2 August 2025 — obligations for general-purpose AI (GPAI) models and governance provisions
  • 2 August 2026 — most remaining provisions, including the requirements for high-risk systems
  • 2 August 2027 — high-risk requirements for AI embedded in products already covered by EU product safety legislation

The most immediate concern for UK businesses is the 2 August 2026 deadline for high-risk AI systems. If you deploy AI in employment, credit, education, or essential services within the EU, you should already be preparing for conformity assessments and technical documentation requirements.

Businesses that provide general-purpose AI models should note that GPAI obligations have been in force since August 2025. If you have not yet assessed your compliance position, do so now — the EU AI Office can take enforcement action.

High-risk AI obligations

High-risk AI systems face the most demanding requirements under the Act. These are AI systems whose failure or misuse could cause significant harm to health, safety, or fundamental rights.

If any of your AI systems fall within these categories and you deploy them in the EU or they affect EU-based individuals, you must meet the full suite of high-risk obligations by 2 August 2026. This includes:

  • Risk management system — identify, analyse, and mitigate risks throughout the AI system's lifecycle
  • Data governance — ensure training data is relevant, representative, and free from errors that could lead to discrimination
  • Technical documentation — maintain detailed records of system design, development, testing, and performance
  • Human oversight — design the system so a human can effectively oversee its operation and intervene when necessary
  • Conformity assessment — demonstrate compliance through self-assessment or third-party audit, depending on the category

Some categories (such as AI in recruitment or credit scoring) require third-party conformity assessment by a notified body. Others allow self-assessment. Check which assessment route applies to each of your high-risk systems.

General-purpose AI (GPAI) obligations

The GPAI provisions affect UK-based AI model developers who make their models available in the EU — including through API access, licensing, or open-source distribution where the model is used commercially.

These obligations have been in force since 2 August 2025. If you develop or provide a general-purpose AI model that is used by downstream deployers in the EU, you should already have:

  • Prepared technical documentation describing the model's capabilities and limitations
  • Published a sufficiently detailed summary of training content
  • Assessed whether your model exceeds the systemic risk threshold (10^25 floating-point operations (FLOPs) of training compute)
  • If systemic: conducted adversarial testing and established incident reporting processes
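The systemic risk threshold check above is an order-of-magnitude calculation. The sketch below uses the common 6 × parameters × training tokens heuristic for estimating a dense transformer's training compute — that estimation formula is a widely used rule of thumb, not a method prescribed by the Act, and the model sizes shown are hypothetical.

```python
# Rough check of estimated training compute against the EU AI Act's
# systemic risk threshold of 1e25 FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the 6 * N * D heuristic."""
    return 6.0 * n_params * n_tokens

def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic: {exceeds_threshold(70e9, 15e12)}")
# 6 * 70e9 * 15e12 = 6.3e24, just under the 1e25 threshold
```

If your estimate lands anywhere near the threshold, document the calculation and its assumptions — the EU AI Office can also designate models as systemic on other grounds.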

The EU AI Office is the competent authority for GPAI enforcement and has begun publishing codes of practice. Monitor these closely, as they will define what compliance looks like in practice.

Penalties

The penalty regime is designed to be dissuasive, particularly for larger businesses. Fines are calculated as the higher of a fixed amount or a percentage of global annual turnover, in three tiers: up to 35 million EUR or 7% for prohibited AI practices, up to 15 million EUR or 3% for breaches of most other obligations (including the high-risk requirements), and up to 7.5 million EUR or 1% for supplying incorrect or misleading information to authorities.

SMEs and startups benefit from reduced caps — the lower of the fixed amount or the percentage applies, rather than the higher. Even so, a 1% turnover fine for providing incorrect information to authorities is a meaningful sum for most businesses.
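The higher-of/lower-of cap arithmetic can be made concrete with a short worked example, assuming the Act's published fine tiers (35 million EUR or 7% for prohibited practices; 7.5 million EUR or 1% for supplying incorrect information):

```python
# Sketch of the EU AI Act's penalty-cap arithmetic. For most businesses the
# cap is the HIGHER of the fixed amount and the turnover percentage; for
# SMEs and startups it is the LOWER of the two.
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float, sme: bool) -> float:
    pct_amount = turnover_eur * pct
    return min(fixed_eur, pct_amount) if sme else max(fixed_eur, pct_amount)

# Large firm, prohibited-practice breach, 2bn EUR turnover:
# 7% of 2bn = 140m, which exceeds the 35m fixed amount
print(fine_cap(2e9, 35e6, 0.07, sme=False))

# SME, incorrect-information breach, 10m EUR turnover:
# 1% of 10m = 100k, which is below the 7.5m fixed amount
print(fine_cap(10e6, 7.5e6, 0.01, sme=True))
```

Even the SME figure in the second example — 100,000 EUR — illustrates the point in the paragraph above: the reduced caps soften the regime but do not make non-compliance cheap.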

Enforcement will be shared between the EU AI Office (for GPAI models) and national market surveillance authorities in each EU member state (for high-risk systems). UK businesses deploying AI in multiple EU countries may face scrutiny from more than one authority.

How this interacts with UK regulation

The UK does not have an equivalent single AI law. Instead, the UK takes a principles-based, sector-specific approach where existing regulators (FCA, ICO, CMA, EHRC, Ofcom, MHRA) apply their existing frameworks to AI within their domains.

For UK businesses, this creates the potential for dual compliance:

  • EU AI Act — mandatory if you place AI on the EU market or serve EU users
  • UK sector regulators — mandatory for activities regulated in the UK (e.g. FCA for financial services AI, ICO for automated decision-making under UK GDPR, MHRA for AI medical devices)
  • UK AI Safety Institute — voluntary engagement for frontier AI models, but increasingly expected for responsible development

In practice, many EU AI Act requirements overlap with existing UK obligations. A robust risk management process, transparent documentation, and meaningful human oversight will satisfy both regimes. However, the EU Act's conformity assessment and CE marking requirements are specific to the EU and have no UK equivalent.

The Digital Markets, Competition and Consumers Act 2024 (DMCCA) is the closest UK legislation in this space, but it focuses on digital markets competition rather than AI risk classification. Treat EU and UK requirements as complementary, not interchangeable.