The EU AI Act is the world's first comprehensive legal framework regulating artificial intelligence. It entered into force in August 2024 and applies in stages through to 2027. If your organisation operates in EU markets, sells AI-powered products to EU customers, or uses AI systems that affect people in the EU, this regulation applies to you, regardless of where you are headquartered.
The Risk-Based Framework
The EU AI Act takes a risk-based approach, classifying AI systems into four tiers:
- Unacceptable risk (prohibited): AI systems that manipulate human behaviour subliminally, exploit the vulnerabilities of specific groups, perform social scoring, or conduct real-time remote biometric identification in publicly accessible spaces (subject to narrow exceptions). These are banned outright.
- High risk: AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice. Subject to the most stringent requirements.
- Limited risk: Systems such as chatbots that interact with humans. These carry mainly transparency obligations: users must be informed that they are interacting with AI.
- Minimal risk: The vast majority of AI applications (spam filters, AI in video games, etc.). No specific obligations under the Act.
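As a rough illustration only, the four-tier structure can be modelled as a classification lookup. The tier names follow the Act, but the example use cases and the mapping below are simplified assumptions, not legal definitions; real classification must follow the Act's own criteria (and, for high risk, Annex III) with legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping from simplified use-case labels to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,        # employment context
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a system's tier; unknown use cases must be reviewed, never defaulted."""
    if use_case not in USE_CASE_TIERS:
        raise ValueError(f"Unclassified use case: {use_case!r}; needs review")
    return USE_CASE_TIERS[use_case]
```

Note the design choice: an unknown system raises an error rather than silently falling back to minimal risk, mirroring the compliance principle that every system must be actively classified.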
Key Obligations for High-Risk Systems
If your AI system falls into the high-risk category, you must:
- Establish a risk management system and document it throughout the system lifecycle
- Ensure data governance — training data must be relevant, representative, and free from discriminatory bias
- Maintain comprehensive technical documentation
- Enable logging and audit trails for automatic record-keeping
- Provide transparency and information to users
- Ensure human oversight mechanisms are built in
- Meet accuracy, robustness, and cybersecurity standards
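Of the obligations above, automatic record-keeping is the most directly technical. As a minimal sketch (the field names and JSON Lines format are illustrative assumptions, not prescribed by the Act), an append-only decision log might look like this:

```python
import datetime
import json

def log_decision(logfile, system_id, inputs_ref, output, operator):
    """Append one structured record per automated decision (illustrative schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,    # which AI system produced the decision
        "inputs_ref": inputs_ref,  # reference to the input data, not the data itself
        "output": output,          # the decision or score produced
        "operator": operator,      # human overseer on duty, if any
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # JSON Lines: one record per line
    return record
```

Storing a reference to the inputs rather than the inputs themselves keeps the audit trail lean and avoids duplicating personal data into the log.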
Important: High-risk AI systems must undergo a conformity assessment before being placed on the EU market. This is not a box-ticking exercise — it requires substantial technical and governance documentation.
What UK Businesses Need to Do Now
- Audit your AI inventory. Map every AI system you use or sell — whether built in-house, purchased, or embedded in third-party tools.
- Classify each system. Determine whether each system is prohibited, high-risk, limited-risk, or minimal-risk under the Act's definitions.
- Run a gap analysis. For high-risk systems, identify where your current documentation and governance practices fall short of the Act's requirements.
- Build compliance into your AI development process. Retrofitting compliance onto existing systems is significantly more expensive than building it in from the start.
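The steps above can be sketched as a simple inventory register. The fields and the three controls checked here are illustrative assumptions (a real gap analysis covers all the obligations listed earlier), but the shape of the exercise is the same: record every system, tag its tier, and report what is missing.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk_tier: str             # "prohibited" | "high" | "limited" | "minimal"
    source: str                # "in-house" | "purchased" | "embedded"
    has_tech_docs: bool = False
    has_risk_mgmt: bool = False
    has_human_oversight: bool = False

# Illustrative subset of controls required for high-risk systems.
REQUIRED_FOR_HIGH = ("has_tech_docs", "has_risk_mgmt", "has_human_oversight")

def gap_analysis(inventory):
    """Return, for each high-risk system, the controls it is missing."""
    gaps = {}
    for system in inventory:
        if system.risk_tier == "high":
            missing = [c for c in REQUIRED_FOR_HIGH if not getattr(system, c)]
            if missing:
                gaps[system.name] = missing
    return gaps
```

Running this over an inventory of two systems, one high-risk with only technical documentation in place, would flag that system's missing risk management and human oversight controls.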
The EU AI Act is the beginning of a global regulatory convergence around AI governance. Even organisations that are not currently subject to it would be wise to build compliant practices now — both to prepare for the UK's own evolving AI regulatory framework and to meet the growing governance expectations of enterprise clients and procurement processes.
Need Help with EU AI Act Compliance?
Our AI Governance practice helps organisations audit their AI systems, classify risk levels, and build compliant frameworks that satisfy regulators without slowing down innovation.
Book a Discovery Call →