
What the AI Act Is

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for regulating artificial intelligence. It entered into force on 1 August 2024, with its provisions phasing in over a staggered timeline through to August 2027. The regulation applies across the European Union and, like the GDPR, has significant extraterritorial reach.

The AI Act is not a ban on AI. It is a risk-based regulatory framework that imposes different obligations depending on the risk level of the AI system in question. Most AI applications fall into low-risk or minimal-risk categories and face light or no regulatory obligations. A smaller subset — those that pose significant risks to health, safety, or fundamental rights — face substantial compliance requirements. A narrow category of AI practices is prohibited outright.

The regulation sits alongside, rather than replacing, existing EU legislation. The GDPR continues to govern personal data processing. Sector-specific regulations (medical devices, financial services, employment law) continue to apply in their domains. The AI Act adds a horizontal layer of AI-specific requirements on top of this existing framework.

Who It Applies To

The AI Act applies to several categories of actors, but the two most important for businesses are providers and deployers.

Providers are the entities that develop an AI system or have an AI system developed on their behalf and place it on the market or put it into service under their own name or trademark. If you build AI tools — whether for sale to others or for internal use — you are likely a provider.

Deployers are the entities that use an AI system under their authority, except where the system is used in the course of a personal non-professional activity. If you use AI tools in your business operations — a recruitment screening tool, a credit scoring system, a customer service chatbot — you are likely a deployer.

Many businesses are both. A company that develops an AI-powered product for its clients (provider) while also using third-party AI tools for internal operations (deployer) faces obligations in both roles.

The AI Act also applies to importers, distributors, and product manufacturers in certain circumstances, and it has extraterritorial effect: providers outside the EU whose AI systems are placed on the market or used in the EU must comply, and deployers outside the EU whose AI systems produce output used in the EU are also caught.

The Risk-Based Framework

The AI Act categorises AI systems into four risk tiers, each with different regulatory consequences.

Prohibited AI practices. A narrow set of AI applications is banned outright. These include social scoring systems by public authorities, real-time remote biometric identification in publicly accessible spaces for law enforcement (with limited exceptions), AI that exploits vulnerabilities of specific groups (age, disability), AI that manipulates behaviour in ways that cause significant harm, emotion recognition in workplaces and educational institutions (with narrow exceptions), and untargeted scraping of facial images to build facial recognition databases. The prohibitions took effect on 2 February 2025.

High-risk AI systems. AI systems that pose significant risks to health, safety, or fundamental rights are classified as high-risk and must comply with extensive requirements. High-risk systems include AI used in biometric identification and categorisation, critical infrastructure management, education and vocational training (scoring, access decisions), employment and worker management (recruitment, task allocation, performance monitoring), access to essential services (credit scoring, insurance risk assessment), law enforcement, migration and border control, and the administration of justice. High-risk systems must comply with requirements for risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Compliance obligations for high-risk AI systems apply from 2 August 2026.

Limited-risk AI systems. AI systems that interact with people, generate synthetic content, or perform emotion recognition or biometric categorisation face transparency obligations. Users must be informed that they are interacting with an AI system, that content has been artificially generated or manipulated, or that emotion recognition or categorisation is taking place. Chatbots, deepfake generators, and AI content creation tools fall into this category.

Minimal-risk AI systems. The vast majority of AI applications — spam filters, AI-assisted inventory management, recommendation engines, most business analytics tools — are classified as minimal risk and face no mandatory obligations under the AI Act, though providers are encouraged to voluntarily adopt codes of practice.
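The four tiers can be sketched as a simple data model. This is an illustrative simplification, not a legal classification: the tier names mirror the Act's categories, but the example mappings and the default are assumptions drawn from the examples above.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of example use cases to tiers, based on the
# examples in the text above -- not a substitute for legal analysis.
EXAMPLE_TIERS = {
    "social_scoring_by_public_authority": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Defaulting unlisted systems to MINIMAL is purely for illustration;
    # in practice each system needs an individual assessment.
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In a real compliance exercise the mapping would be the output of a documented assessment per system, not a static lookup table.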

General-Purpose AI Models

The AI Act introduces a separate regime for general-purpose AI (GPAI) models — foundation models like large language models that can be used for a variety of purposes. GPAI providers must comply with transparency requirements including publishing a sufficiently detailed summary of training data, complying with EU copyright law (including the text and data mining opt-out), and drawing up and maintaining technical documentation.

GPAI models that pose systemic risks — those trained with a cumulative compute exceeding 10^25 FLOPs, or designated as such by the Commission — face additional obligations including model evaluation, adversarial testing, incident reporting, and cybersecurity measures. The GPAI provisions apply from 2 August 2025.
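The compute-based presumption described above reduces to a single comparison; the Commission-designation route is modelled here as a boolean flag, which is our own simplification.

```python
# A GPAI model is presumed to pose systemic risk when its cumulative
# training compute exceeds 10^25 FLOPs, or when the Commission
# designates it as such.
SYSTEMIC_RISK_FLOPS = 1e25

def poses_systemic_risk(training_flops: float, designated: bool = False) -> bool:
    return designated or training_flops > SYSTEMIC_RISK_FLOPS
```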

The Compliance Timeline

The AI Act phases in over several years. The key dates are as follows.

1 August 2024: the regulation entered into force.

2 February 2025: prohibitions on banned AI practices took effect, and the AI literacy obligation under Article 4 became enforceable.

2 August 2025: rules on GPAI models apply, codes of practice provisions take effect, and governance structures (the AI Office, the AI Board, the advisory forum) are fully operational.

2 August 2026: most provisions apply, including the requirements for high-risk AI systems, deployer obligations, and transparency rules.

2 August 2027: obligations for high-risk AI systems that are components of regulated products (medical devices, automotive, aviation) take effect.

This staggered timeline gives businesses time to prepare, but the early deadlines — particularly the AI literacy requirement and the prohibited practices — are already enforceable.
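The staggered timeline lends itself to a simple lookup: given a date, which milestones already apply? The labels below are informal shorthand for the milestones listed above, not the Act's wording.

```python
from datetime import date

# Key dates from the AI Act's phase-in, each mapped to an informal
# label for the obligations that start applying on that date.
TIMELINE = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibited practices ban; AI literacy (Article 4)",
    date(2025, 8, 2): "GPAI rules; governance structures operational",
    date(2026, 8, 2): "high-risk requirements; deployer obligations; transparency",
    date(2027, 8, 2): "high-risk AI embedded in regulated products",
}

def milestones_in_force(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for d, label in sorted(TIMELINE.items()) if d <= on]
```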

What Compliance Looks Like

For most businesses, AI Act compliance involves several practical workstreams.

AI inventory. Before you can assess your obligations, you need to know what AI systems you use and provide. This means cataloguing every AI tool in your organisation — including those embedded in third-party software that you may not think of as AI. Many businesses are surprised by the breadth of their AI footprint once they conduct a systematic inventory.
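A minimal sketch of what an inventory record might capture, assuming the fields a later classification exercise would need. The field names and example entries are our own, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: str                 # "provider", "deployer", or both
    use_case: str
    embedded_in: str = ""     # host product, if the AI is embedded in third-party software
    risk_tier: str = "unclassified"

# Hypothetical inventory entries for illustration.
inventory = [
    AISystemRecord("CV screener", "ExampleHR", "deployer", "recruitment"),
    AISystemRecord("Support bot", "in-house", "provider", "customer service"),
]

# The classification step then works through everything still unclassified.
unclassified = [r.name for r in inventory if r.risk_tier == "unclassified"]
```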

Risk classification. For each AI system in your inventory, determine which risk tier it falls into. Most will be minimal or limited risk. Those that are high-risk require the most attention.

Gap analysis. For high-risk systems, compare your current practices against the AI Act requirements for risk management, data governance, documentation, transparency, human oversight, and cybersecurity. Identify the gaps.

Provider assessment. If you use AI systems provided by third parties, assess whether your providers are preparing for compliance. The AI Act places primary obligations on providers of high-risk systems, but deployers also have obligations — and a deployer cannot simply rely on the provider’s compliance without verification.

AI literacy. Article 4 requires that providers and deployers ensure their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy, taking into account their technical knowledge, experience, education, and training, as well as the context and intended use of the AI systems. This obligation is already in force.

Governance framework. Establish internal governance for AI — policies, procedures, accountability structures, and oversight mechanisms. This does not need to be a separate bureaucracy; for most businesses, it can be integrated into existing compliance and risk management structures.

Penalties

The AI Act establishes a tiered penalty framework. Violations of the prohibited practices provisions can result in fines of up to EUR 35 million or 7% of annual global turnover, whichever is higher. Non-compliance with high-risk AI system requirements can result in fines of up to EUR 15 million or 3% of turnover. Supplying incorrect information to authorities can result in fines of up to EUR 7.5 million or 1% of turnover. SMEs and startups benefit from proportionate caps on penalties.
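Each ceiling above is the higher of a fixed amount and a percentage of annual global turnover, which reduces to a `max()` over the two. The tier keys below are our own shorthand, and the sketch ignores the proportionate caps for SMEs and startups.

```python
# Fine ceilings from the AI Act's tiered penalty framework:
# (fixed amount in EUR, fraction of annual global turnover).
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Ceiling for a violation: the higher of the fixed amount and the turnover share."""
    fixed, pct = PENALTY_TIERS[violation]
    return max(fixed, pct * annual_turnover_eur)
```

For a company with EUR 1 billion in turnover, the prohibited-practices ceiling is 7% of turnover (EUR 70 million) rather than the EUR 35 million floor.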

Practical First Steps

If you have not started AI Act compliance, the most productive first steps are conducting an AI inventory across your organisation, screening that inventory for prohibited practices, assessing AI literacy among staff who interact with AI systems, identifying any high-risk AI systems in your inventory, and beginning to review your contracts with AI providers for compliance representations and allocation of responsibilities.

The AI Act is complex, but the core principle is straightforward: the higher the risk, the greater the obligation. Most businesses will find that the majority of their AI use falls into low-risk categories with minimal regulatory burden. The effort concentrates on the smaller number of systems that genuinely affect people’s rights and safety.

If you need help assessing your AI Act obligations or building an AI governance framework, get in touch or schedule a meeting with our team.

Bart Lieben
Attorney-at-Law