The EU AI Act entered into force on 1 August 2024 and is the world's first comprehensive legal framework specifically regulating artificial intelligence. It applies to providers that place AI systems on the EU market, deployers that use AI systems in the EU, and importers and distributors in the AI supply chain. Its reach extends to providers and deployers established outside the EU where the system is placed on the EU market or its output is used within the EU.
The Act's core mechanism is risk-based regulation: different obligations attach to AI systems depending on their risk level. At one end, certain AI practices are prohibited outright. In between, limited-risk systems such as chatbots and generators of synthetic content carry transparency obligations under Article 50. At the other end, minimal-risk AI (which covers the vast majority of AI applications) carries no mandatory compliance obligations beyond the general AI literacy requirement. The substantive compliance burden falls on high-risk AI systems, which are defined by reference to their intended purpose and the sector in which they are deployed, and on providers of general-purpose AI (GPAI) models.
The AI Act's obligations fall on providers (those who develop AI systems and place them on the market or put them into service) and deployers (those who use AI systems under their own authority in a professional context). The Act explicitly exempts AI systems used for national security purposes and AI systems used for personal non-professional activities. Research and development activities are not fully exempt: AI systems developed and tested under research conditions may still be subject to the Act's requirements if they are made available to users in the EU.
For businesses, the practical question is whether their AI systems are high-risk under the Act's Annex III classification (or under Annex I, as safety components of products already subject to EU harmonisation legislation) or qualify as GPAI models. These are the systems that require substantial compliance investment. Businesses that use only minimal-risk AI tools (spam filters, recommendation engines, productivity tools) have limited mandatory obligations, though the AI literacy requirement under Article 4 applies across all risk levels.
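As a first-pass triage, the sketch below encodes the Annex III area headings as a simple lookup. The area names follow the Act's headings, but the mapping logic is an illustrative simplification of my own: real classification turns on the system's intended purpose and the Article 6(3) derogations, not a keyword match.

```python
# First-pass triage against the Annex III high-risk areas. The area list
# follows the Act's headings; the lookup logic is an illustrative
# simplification, not a substitute for a proper Article 6 assessment.

ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "administration_of_justice_and_democratic_processes",
}

def triage(intended_purpose_area: str) -> str:
    """Rough risk-tier triage based on the system's intended-purpose area."""
    if intended_purpose_area in ANNEX_III_AREAS:
        return "potentially high-risk: assess Article 6 and Annex III in detail"
    return "likely limited or minimal risk: check Article 50 transparency duties"

print(triage("employment_and_worker_management"))
```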
For high-risk AI systems, the Act requires a comprehensive set of measures. Providers must establish a risk management system, maintain technical documentation, implement quality management procedures, design the system for human oversight, ensure accuracy and robustness, register the system in the EU AI database, and complete a conformity assessment before market placement. Deployers must operate the system in accordance with the provider's instructions for use, conduct a fundamental rights impact assessment in certain contexts, ensure human oversight, monitor the system's performance, and report serious incidents to the competent authority.
These requirements are not a compliance exercise to be completed once and set aside; they are ongoing operational obligations. The risk management system must operate continuously throughout the system's lifecycle. Technical documentation must reflect the current state of the system. Post-market monitoring must detect emerging issues. The compliance framework is designed to function as a living governance system, not a static certification.
Providers of general-purpose AI models (which include large language models, multimodal foundation models, and other models capable of performing a wide range of tasks) have their own compliance framework under Chapter V of the Act. These obligations have applied since 2 August 2025 and include maintaining and providing technical documentation, publishing a sufficiently detailed summary of the content used for training, and putting in place a policy to comply with EU copyright law, including the rights-holders' opt-out under Article 4(3) of the Copyright in the Digital Single Market Directive.
GPAI models that present systemic risk (those whose cumulative training compute exceeds 10^25 FLOPs, which triggers a presumption of systemic risk, or those designated by the Commission) face additional obligations: adversarial testing before release, incident reporting to the AI Office, cybersecurity measures, and reporting of known or estimated energy consumption. The AI Office within the European Commission is the primary supervisory authority for GPAI model compliance.
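To make the 10^25 FLOPs threshold concrete, the sketch below applies the widely cited 6 × parameters × training-tokens approximation for dense transformer training compute. The approximation and the example figures are assumptions for illustration; the Act itself sets only the threshold, not the estimation method.

```python
# Rough check against the AI Act's 10^25 FLOPs presumption of systemic risk.
# Uses the common 6 * N * D approximation for dense transformer training
# compute (N = parameters, D = training tokens). The example figures are
# illustrative assumptions, not real model disclosures.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk under the Act"
      if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the 10^25 FLOPs presumption threshold")
```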
Non-EU providers are not outside the Act's reach: it applies whenever an AI system is placed on the EU market or its output is used in the EU. A provider established in the US, India, or any other non-EU country that offers an AI system to EU customers is subject to the AI Act in the same way as an EU-based provider. The Act follows the market access model of other EU product legislation: access to the EU market requires compliance, regardless of where the provider is established. Non-EU providers must also designate an authorised representative in the EU.
The AI Act and the GDPR operate as concurrent frameworks for AI systems that process personal data, which covers the majority of commercially deployed AI. GDPR compliance does not satisfy AI Act requirements, and AI Act compliance does not substitute for GDPR compliance. Both must be addressed, and in several areas (risk assessments, transparency, automated decision-making) the two frameworks impose overlapping but distinct obligations. Organisations should assess both frameworks together rather than treating them as sequential compliance tasks.
The AI Act has a tiered fine structure, with the higher of the two amounts applying in each tier. Violations of the prohibited AI practices provisions attract fines of up to EUR 35 million or 7% of global annual turnover. Violations of other substantive requirements (including high-risk AI system requirements and GPAI model obligations) attract fines of up to EUR 15 million or 3% of global annual turnover. Providing incorrect or misleading information to authorities attracts fines of up to EUR 7.5 million or 1% of global annual turnover. For SMEs and start-ups, the rule inverts: the applicable cap is the lower of the absolute amount and the turnover-based percentage.
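As a rough illustration of how the caps interact, the sketch below computes the maximum fine from the figures above. The tier keys and the function are illustrative constructions; actual fines are set by national authorities within these caps and depend on the circumstances of the infringement.

```python
# Illustrative calculation of the AI Act's fine caps (Article 99).
# The tier figures come from the Act; the function is a simplified
# sketch of the cap arithmetic, not legal advice.

FINE_TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),  # EUR cap, share of turnover
    "other_requirements":     (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def fine_cap(tier: str, global_annual_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Maximum fine: higher of the two amounts, lower of the two for SMEs."""
    absolute_cap, turnover_share = FINE_TIERS[tier]
    turnover_cap = turnover_share * global_annual_turnover_eur
    return min(absolute_cap, turnover_cap) if is_sme else max(absolute_cap, turnover_cap)

# Hypothetical large provider with EUR 2bn turnover violating Article 5:
print(f"EUR {fine_cap('prohibited_practices', 2_000_000_000):,.0f}")  # EUR 140,000,000
```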
The compliance deadlines are staggered, and two have already passed: Article 5's prohibitions have applied since 2 February 2025, and Chapter V's GPAI model obligations since 2 August 2025. For high-risk AI systems under Annex III, the compliance deadline is 2 August 2026. Given the time required to conduct risk classifications, develop technical documentation, and complete conformity assessments, organisations with Annex III systems that have not begun their compliance work are already behind schedule.
