The EU AI Act does not impose all of its obligations simultaneously. Its compliance requirements apply in a sequence of deadlines tied to its entry into force on 1 August 2024. Understanding this timeline is essential for any organisation planning its AI Act compliance programme. Planning around the wrong deadline, or missing one that has already passed, creates immediate regulatory exposure.
The Act's structure can be understood in four main phases. Phase 1 addressed prohibited AI practices, which became unlawful from 2 February 2025. Phase 2 addressed general-purpose AI (GPAI) model obligations, applicable from 2 August 2025. Phase 3 addresses high-risk AI systems listed in Annex III, where the full compliance framework applies from 2 August 2026. Phase 4 addresses high-risk AI systems embedded in products regulated under existing EU product safety legislation (Annex I), which benefit from extended transition periods running to 2027 and 2028 depending on the applicable product regime. Two additional categories, covering AI systems in legal and judicial contexts and some public sector AI, have transition periods extending to 2030.
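The sequence above can be sketched as a small lookup. This is an illustrative aid, not legal tooling: the phase labels are this article's shorthand, the dates are those stated above, and the function name is invented for the example.

```python
from datetime import date

# Key applicability dates under the AI Act, as described in the text above.
# The "Phase" labels are this article's shorthand, not terms from the Act.
MILESTONES = {
    date(2025, 2, 2): "Phase 1: prohibitions on unacceptable-risk practices (Article 5)",
    date(2025, 8, 2): "Phase 2: general-purpose AI (GPAI) model obligations",
    date(2026, 8, 2): "Phase 3: full framework for Annex III high-risk systems",
    date(2027, 8, 2): "Phase 4: high-risk AI embedded in Annex I regulated products",
}

def applicable_obligations(on: date) -> list[str]:
    """Return the obligation phases already applicable on a given date."""
    return [label for start, label in sorted(MILESTONES.items()) if on >= start]

# Example: in March 2026, the prohibitions and the GPAI obligations
# already apply; the Annex III framework does not yet.
for label in applicable_obligations(date(2026, 3, 1)):
    print(label)
```

The later transition periods running to 2030 are omitted here because, as the text notes, they depend on the applicable product regime or deployment context rather than a single fixed date.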
Article 5's prohibitions on unacceptable-risk AI practices became applicable six months after entry into force, on 2 February 2025, alongside the Act's general provisions, including the AI literacy obligation in Article 4. The governance provisions covering the AI Office and the European AI Board applied later, from 2 August 2025. Organisations should have completed their assessment of prohibited practices in advance of the February 2025 date and ceased any activities that fall within the prohibited categories.
Beyond prohibited practices, Phase 1 also marked the beginning of the period for the Commission to develop delegated acts, guidance documents, and harmonised standards. These form the secondary legislative architecture that fleshes out the Act's requirements. Organisations planning their compliance programmes should monitor these developing standards, as they will define the technical requirements for conformity assessments in ways that the Regulation itself does not specify.
The obligations on providers of general-purpose AI (GPAI) models applied from 2 August 2025. These include maintaining technical documentation of the model's design, training, and testing; providing information and documentation to downstream providers who integrate the model; publishing a sufficiently detailed public summary of the content used for training; and implementing a policy to comply with EU copyright law, including respecting rights-holders' opt-outs from text and data mining.
GPAI models that present systemic risk, defined by a training compute threshold of 10^25 FLOPs or by Commission designation, face additional obligations from the same date: adversarial testing (red-teaming) before placement on the market, notification and reporting of serious incidents to the AI Office, cybersecurity measures, and energy efficiency reporting. The AI Office is the primary supervisory authority for GPAI models, operating within the European Commission's structure.
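The compute threshold lends itself to a back-of-the-envelope check. The sketch below uses the common 6 × parameters × tokens estimate for the training FLOPs of dense transformer models; that rule of thumb is an assumption of this example, not a formula from the Regulation, and the function names are invented for illustration.

```python
# Illustrative check against the AI Act's systemic-risk compute threshold.
# ASSUMPTION: training FLOPs ~= 6 * parameter count * training tokens,
# a common estimate for dense transformers; the Act itself only states
# the 10^25 FLOP threshold, not how to estimate compute.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer model."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))   # False
# A 400B-parameter model on 15T tokens: 3.6e25 FLOPs, above it.
print(presumed_systemic_risk(4e11, 1.5e13))   # True
```

Note that the Commission can also designate a model as presenting systemic risk regardless of compute, so this arithmetic is a presumption trigger, not a complete classification.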
The principal deadline for most businesses is 2 August 2026, when the full compliance framework for high-risk AI systems listed in Annex III becomes applicable. From this date, providers of high-risk AI systems must have completed their conformity assessment, registered their system in the EU AI database, and maintained all required technical documentation and post-market monitoring procedures. Deployers of high-risk AI systems must have conducted their fundamental rights impact assessments, implemented appropriate human oversight, and notified workers' representatives as required.
The period between now and August 2026 is an active compliance window, not a waiting period. Conformity assessments for complex systems take months to complete, and technical documentation must be developed contemporaneously with the system's design and development, not assembled retroactively. Organisations that began this process in 2025 will be well positioned; those that treat August 2026 as the starting date for their compliance work will not.
AI systems that are safety components of products subject to existing EU product safety legislation (medical devices, in vitro diagnostics, machinery, radio equipment) benefit from extended transition periods. These transitions are tied to the revision cycles of the applicable product legislation and generally expire in 2027 or 2028. Organisations in healthcare, manufacturing, and transport with AI-embedded products should assess the applicable product legislation alongside the AI Act to determine the precise deadline.
As of March 2026, the prohibitions under Article 5 (prohibited AI practices) have applied since 2 February 2025. GPAI model obligations have applied since 2 August 2025. High-risk AI system obligations under Annex III apply from 2 August 2026. Compliance programmes for Annex III systems should be substantially advanced at this point, not just starting. Organisations should be actively completing risk classifications, building documentation frameworks, and preparing conformity assessments.
High-risk AI systems listed in Annex III that are already on the market or in service before 2 August 2026 benefit from a grandfathering regime: the Act's requirements apply to them only if they undergo significant changes in design after that date, and systems intended for use by public authorities must be brought into compliance by 2 August 2030. (GPAI models placed on the market before 2 August 2025 have a separate transition running to 2 August 2027.) These transition arrangements do not mean compliance work should be deferred. Any significant modification brings a system within the full framework, so the documentation, monitoring, and human oversight requirements should be in place for continued operation.
The AI Office is the EU-level body within the European Commission responsible for supervising GPAI models, coordinating consistent application of the AI Act across member states, and providing guidance and technical standards. For high-risk AI systems in most Annex III sectors, national market surveillance authorities in each member state have primary supervisory jurisdiction. The AI Office has exclusive jurisdiction over GPAI model compliance and incident reporting.
Companies with Annex III high-risk AI systems should use the period to August 2026 to complete their AI system inventory and risk classification, determine provider versus deployer roles for each system, conduct fundamental rights impact assessments, develop or commission technical documentation, prepare for conformity assessment (self-assessment or third-party audit), establish human oversight procedures, and set up post-market monitoring and incident-reporting workflows. Companies with GPAI models in their infrastructure should verify compliance with the obligations that have applied since August 2025.
