
Why the Timeline Matters

The EU AI Act does not switch on all at once. Unlike the GDPR, which had a single enforcement date after a two-year grace period, the AI Act phases in over three years through a series of staggered deadlines. Different obligations take effect at different times, depending on the risk level of the AI system and the nature of the requirement.

This staggered approach is both a feature and a trap. The feature is that it gives businesses time to prepare for the more demanding requirements. The trap is that early deadlines are easy to overlook, and some obligations — particularly the AI literacy requirement and the prohibited practices rules — are already enforceable. Businesses that assume they have until 2026 or 2027 to start preparing are already behind.

1 August 2024: Entry into Force

The AI Act entered into force on 1 August 2024, twenty days after its publication in the Official Journal of the European Union. This date started the clock on all subsequent deadlines but did not itself trigger any substantive obligations on businesses. It marks the beginning of the phasing-in period.

2 February 2025: Prohibited Practices and AI Literacy

Six months after entry into force, two categories of obligations became enforceable.

Prohibited AI practices (Article 5). From this date, the AI practices listed in Article 5 are banned: social scoring, subliminal manipulation, exploitation of vulnerabilities, untargeted facial recognition scraping, emotion recognition in workplaces or education (with narrow exceptions), biometric categorisation of sensitive attributes, individual predictive policing, and real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions). Any business operating a prohibited AI system must have discontinued the practice by this date.

AI literacy (Article 4). Providers and deployers must ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This obligation is broad — it applies to all providers and deployers, regardless of the risk level of their AI systems — and it is already enforceable.

AI literacy does not mean that every employee needs a technical degree. The obligation is proportionate: the required level of literacy depends on the person’s role, technical knowledge, experience, and the context in which they use AI. A marketing manager using an AI content generation tool needs to understand what the tool can and cannot do, what its limitations are, and where human judgment remains essential. A data scientist developing AI models needs deeper technical understanding. The obligation is about ensuring that people who work with AI understand enough to use it responsibly.

Practical steps to comply with the AI literacy requirement include identifying which staff members interact with AI systems, assessing their current understanding of AI capabilities and limitations, providing targeted training appropriate to their role and the systems they use, documenting the training provided, and establishing a mechanism for ongoing updates as AI capabilities and regulations evolve.

2 August 2025: General-Purpose AI and Governance

Twelve months after entry into force, obligations related to general-purpose AI models and governance structures take effect.

General-purpose AI model obligations (Chapter V). Providers of GPAI models — foundation models, large language models, and similar general-purpose systems — must comply with transparency and documentation requirements. This includes making available a sufficiently detailed summary of the content used for training, drawing up and maintaining technical documentation, establishing a policy to comply with EU copyright law (including respecting text and data mining opt-outs), and providing downstream providers with sufficient information to comply with their own obligations.

GPAI models classified as posing systemic risks face additional obligations: model evaluation, adversarial testing, tracking and reporting of serious incidents, ensuring adequate cybersecurity protections, and reporting the energy consumption of the model.

Codes of practice. The AI Office is tasked with facilitating the drawing up of codes of practice for GPAI providers, covering transparency obligations, copyright compliance, and risk identification. These codes are intended to provide practical guidance on how to meet the regulatory requirements.

Governance structures. The institutional framework for AI Act enforcement becomes fully operational: the AI Office within the European Commission, the European Artificial Intelligence Board (comprising member state representatives), the advisory forum (stakeholder consultation body), and the scientific panel of independent experts. National competent authorities designated by member states are responsible for market surveillance and enforcement at the national level.

Penalty provisions (Articles 99–101). The penalty framework applies from this date, meaning that violations of the provisions already in force (prohibited practices and AI literacy) can be sanctioned. Member states must have laid down rules on penalties by this date.

2 August 2026: The Main Compliance Date

Two years after entry into force, the majority of the AI Act’s provisions become applicable. This is the date most businesses should be working towards.

High-risk AI system requirements (Chapter III). The full set of obligations for high-risk AI systems takes effect: risk management systems, data governance, technical documentation, record-keeping, transparency and instructions for use, human oversight, accuracy, robustness, and cybersecurity. Providers of high-risk systems must have conformity assessments completed and systems registered in the EU database before placing them on the market.

Deployer obligations for high-risk systems. Deployers of high-risk AI systems must comply with their own set of obligations, including using the system in accordance with the provider’s instructions, ensuring human oversight, monitoring the system’s operation, keeping logs generated by the system, conducting fundamental rights impact assessments (for certain deployers), and informing affected individuals.

Transparency obligations for limited-risk systems (Article 50). The transparency requirements for chatbots, deepfakes, emotion recognition, and biometric categorisation become enforceable.

Obligations for importers and distributors. Entities that import or distribute AI systems in the EU must comply with their respective obligations, including verifying that the provider has completed conformity assessments and that the system bears the required CE marking.

Registration obligations. High-risk AI systems must be registered in the EU database before being placed on the market or put into service. Deployers that are public bodies must also register.

2 August 2027: Regulated Products

Three years after entry into force, obligations take effect for high-risk AI systems that are safety components of products covered by specific EU harmonisation legislation listed in Annex I. These include medical devices, in vitro diagnostic medical devices, civil aviation products, motor vehicles, agricultural and forestry vehicles, marine equipment, railway systems, and personal protective equipment, among others.

The extended timeline for these systems reflects the fact that they are already subject to existing sector-specific regulatory frameworks, and additional time is needed to integrate AI Act requirements into those frameworks’ conformity assessment procedures.
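For teams tracking these dates programmatically, the staggered schedule above can be captured in a short script. The following is a minimal sketch: the milestone labels abbreviate the sections above, and `upcoming_deadlines` is an illustrative helper, not part of any official tooling.

```python
from datetime import date

# Key applicability dates under the EU AI Act, as described above.
MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibited practices and AI literacy (Articles 4-5)",
    date(2025, 8, 2): "GPAI obligations, governance, penalties",
    date(2026, 8, 2): "High-risk requirements, transparency, registration",
    date(2027, 8, 2): "Annex I regulated products",
}

def upcoming_deadlines(today: date) -> list[tuple[date, str, int]]:
    """Return (deadline, description, days remaining) for dates not yet passed."""
    return [(d, label, (d - today).days)
            for d, label in sorted(MILESTONES.items()) if d >= today]

for d, label, days in upcoming_deadlines(date(2025, 3, 1)):
    print(f"{d.isoformat()}: {label} (in {days} days)")
```

Feeding in 1 March 2025, for example, flags the August 2025, 2026, and 2027 deadlines as still ahead, which matches the planning picture set out below.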

What This Means for Your Planning

Already overdue (February 2025 deadline). If you have not yet screened your AI systems for prohibited practices and assessed AI literacy across your organisation, these are your immediate priorities. The obligations are enforceable now.

Imminent (August 2025). If you provide or use general-purpose AI models, you should be preparing your transparency documentation, training data summaries, and copyright compliance policies. The GPAI obligations take effect soon.

Primary planning horizon (August 2026). If you provide or deploy high-risk AI systems, the full compliance framework takes effect in August 2026. This means completing your AI inventory and risk classification, implementing risk management systems for high-risk AI, preparing technical documentation, establishing human oversight mechanisms, completing conformity assessments, registering systems in the EU database, and training deployer staff on their specific obligations.

Eighteen months of preparation time is not generous for organisations with complex AI deployments. The conformity assessment process alone can take months, and remediation of gaps identified during assessment may require significant technical work. Businesses that start preparation in early 2026 may find themselves unable to complete it by August.

Extended timeline (August 2027). If your AI systems are safety components of regulated products, you have an additional year. But this should not be treated as a reason to delay — the product-level integration of AI Act requirements with existing conformity assessment procedures is complex and benefits from early planning.

National Implementation

While the AI Act is a regulation (directly applicable across the EU without transposition), member states have significant roles in implementation. Each member state must designate national competent authorities for market surveillance and enforcement, establish rules on penalties, create regulatory sandboxes for testing innovative AI systems, and provide support for SMEs.

The pace and approach of national implementation vary. Some member states are moving quickly; others are still in the early stages. Businesses operating across multiple EU member states should monitor national developments, as enforcement approaches and regulatory sandboxes may differ by country.

If you need to plan your AI Act compliance programme or assess which deadlines affect your business, get in touch or schedule a meeting with our team.

Bart Lieben
Attorney-at-Law
