The EU AI Act does not impose uniform obligations on all AI systems. Instead, it establishes a risk-based framework that categorises AI systems into four tiers: prohibited, high-risk, limited-risk, and minimal-risk. The tier into which your AI system falls determines what obligations apply and whether the system can be placed on the EU market at all.
Getting the classification right is the essential first step in any AI Act compliance programme. An organisation that misclassifies a high-risk system as minimal-risk has no conformity assessment, no technical documentation, no human oversight mechanisms, and no registration in the EU AI database. That amounts to a comprehensive compliance failure that can result in market exclusion and significant fines. Conversely, an organisation that over-classifies a minimal-risk system wastes resources on unnecessary compliance measures. Accurate classification is not a formality: it is the gateway to the rest of the compliance framework.
Article 5 of the AI Act prohibits a set of AI practices that the legislature has determined pose unacceptable risks to fundamental rights and EU values. These prohibitions have applied since 2 February 2025. The prohibited practices include AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort their behaviour in a way that causes or is likely to cause significant harm; AI systems that exploit vulnerabilities arising from age, disability, or a specific social or economic situation to distort behaviour in a harmful way; social scoring systems that lead to detrimental or disproportionate treatment, whether operated by public or private actors; real-time remote biometric identification systems in publicly accessible spaces by law enforcement (with narrow exceptions); AI systems that infer emotions in workplace or educational settings (except for medical or safety reasons); biometric categorisation systems that infer sensitive attributes such as race, political opinions, or sexual orientation; AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage; and AI systems that assess the risk of a person committing a criminal offence based solely on profiling or personality traits (with narrow exceptions).
Placing a prohibited AI system on the market or using it is the most serious category of violation under the AI Act, subject to the highest fine tier: up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
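To make the ceiling concrete: the cap is the higher of a fixed sum and a turnover percentage. The following is a minimal sketch in Python; the figures come from the Act, while the function name and structure are purely illustrative.

```python
# Illustrative only: the top fine tier for prohibited-practice violations
# is the HIGHER of a fixed amount and a percentage of worldwide turnover.

def max_fine_prohibited_practice(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in EUR for an Article 5 violation."""
    fixed_cap = 35_000_000                            # EUR 35 million
    turnover_cap = 0.07 * global_annual_turnover_eur  # 7% of worldwide turnover
    return max(fixed_cap, turnover_cap)

# A company with EUR 1 billion turnover faces a ceiling of EUR 70 million,
# because 7% of turnover exceeds the EUR 35 million floor.
print(max_fine_prohibited_practice(1_000_000_000))  # 70000000.0
```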
High-risk AI systems are subject to the AI Act's full compliance framework. They are identified in two ways. First, Annex I lists EU product safety legislation covering product categories such as machinery, toys, lifts, medical devices, in vitro diagnostics, radio equipment, civil aviation, motor vehicles, marine equipment, and railway systems. An AI system that is a safety component of one of these products is high-risk if the product must undergo third-party conformity assessment under the applicable product legislation.
Second, Annex III lists standalone AI systems in eight areas that are classified as high-risk regardless of whether they are embedded in a regulated product: biometrics, including remote biometric identification and biometric categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to essential private services and essential public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes. Within each of these areas, not every AI application is high-risk: the specific use case must fall within the description in Annex III, and under Article 6(3) a provider may document that a listed system nevertheless does not pose a significant risk of harm (for example, because it performs only a narrow procedural task) and so falls outside the high-risk classification.
Limited-risk AI systems are not subject to the full high-risk compliance framework but are subject to specific transparency obligations under Article 50. The primary examples are AI systems that interact with humans (chatbots, virtual assistants) and AI systems that generate synthetic content (deepfakes, AI-generated text, images, or audio). Operators of these systems must ensure that users are informed they are interacting with an AI and that AI-generated content is identifiable as such, including in a machine-readable format for synthetic content. Emotion recognition and biometric categorisation systems that are not high-risk are also subject to transparency requirements.
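As a rough illustration of how a provider might implement the disclosure duty at the application layer, the sketch below attaches a human-readable notice and machine-readable metadata to generated text. The function name and metadata fields are invented for this example; production systems typically rely on established content-provenance or watermarking standards for the machine-readable marking.

```python
# Hypothetical sketch: wrap AI-generated text with a human-readable
# disclosure and machine-readable metadata before showing it to a user.

import json

def label_ai_output(text: str, model_name: str) -> dict:
    """Attach disclosure data to generated content (illustrative, not normative)."""
    return {
        "content": text,
        "disclosure": "This content was generated by an AI system.",
        "metadata": {                  # machine-readable marking
            "ai_generated": True,
            "generator": model_name,
        },
    }

print(json.dumps(label_ai_output("Hello!", "example-model-v1"), indent=2))
```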
Minimal-risk AI systems, which include spam filters, AI-enabled video games, and the great majority of AI applications in commercial use, are subject to no mandatory obligations under the AI Act other than the general AI literacy obligation in Article 4. Providers and deployers of minimal-risk AI may voluntarily adhere to the codes of conduct contemplated by Article 95.
Start by identifying the primary intended purpose of your AI system, then map it against the eight areas in Annex III and the specific descriptions within each area. Article 6(5) requires the Commission to publish guidelines on the practical application of the high-risk classification, and the AI Office is developing further guidance. The test is the system's intended purpose, meaning how the provider describes and designs the system's use, not any particular deployment. If the system is marketed or designed for a purpose that falls within an Annex III description, it is high-risk, even if it is also used for lower-risk purposes.
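The mapping exercise can be pictured as a decision tree. The Python sketch below is a simplified triage aid, not a legal determination: the area labels, flag names, and order of checks are our own shorthand, and the Article 6(3) derogation and borderline cases still require case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH = "high-risk (Article 6, Annex I or Annex III)"
    LIMITED = "limited-risk (Article 50 transparency)"
    MINIMAL = "minimal-risk"

# Loose shorthand for the eight Annex III areas; the actual Annex
# descriptions are narrower and must be read in full.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum_border",
    "justice_democratic_processes",
}

def classify(is_prohibited_practice: bool,
             is_annex_i_safety_component: bool,
             annex_iii_area: str | None,
             interacts_or_generates_content: bool) -> RiskTier:
    """Triage a system by intended purpose. Each flag is an assessment of
    the provider's intended purpose, not of any single deployment."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED        # Article 5: cannot be marketed
    if is_annex_i_safety_component:
        return RiskTier.HIGH              # Annex I product-safety route
    if annex_iii_area in ANNEX_III_AREAS:
        return RiskTier.HIGH              # Annex III standalone route
    if interacts_or_generates_content:
        return RiskTier.LIMITED           # Article 50 transparency duties
    return RiskTier.MINIMAL

# Example: a CV-screening tool marketed for recruitment falls within the
# employment area of Annex III and is therefore high-risk.
print(classify(False, False, "employment", True))  # RiskTier.HIGH
```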
A provider cannot contract its way out of the classification. The risk classification attaches to the AI system based on its intended purpose, not its actual deployment, so a provider that places an Annex III AI system on the market cannot change its risk classification by restricting in its contracts how deployers use it. The obligations follow the system's design and intended purpose as defined by the provider. If a deployer substantially modifies the system or changes its intended purpose, the deployer may itself become the provider of a new high-risk system under Article 25(1).
General-purpose AI (GPAI) models, including large language models, multimodal models, and other foundation models that can be used for a wide range of tasks, are subject to a separate regime under Chapter V of the AI Act, distinct from the high-risk classification framework. GPAI model providers must maintain technical documentation, provide information to downstream providers, put in place a policy to comply with EU copyright law, and publish a summary of the content used for training. GPAI models with systemic risk face additional obligations, including model evaluation, adversarial testing, serious incident reporting, and cybersecurity protection. Where a GPAI model is integrated into an AI system that falls within Annex III, the system-level high-risk requirements apply to the integrated system, and the provider of that system (which may not be the GPAI model provider) is responsible for compliance.
Systems already on the market are not exempt, but transitional rules soften the timeline. Under Article 111(2), high-risk AI systems that were placed on the market or put into service before 2 August 2026 come within the Act's requirements only if they undergo significant changes in their design after that date; high-risk systems intended to be used by public authorities must in any event be brought into compliance by 2 August 2030. This gives operators of legacy systems a limited reprieve, but it does not eliminate the obligation: the risk classification and the resulting compliance requirements apply to all AI systems that meet the definition, regardless of when they were developed, and a significant modification ends the transitional protection.
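On one reading of Article 111(2), the transitional logic for legacy Annex III systems looks like the following sketch. The dates come from the Act; the function signature is invented for illustration and ignores the separate transitional rules for GPAI models and for the large-scale EU IT systems listed in Annex X.

```python
from datetime import date

GENERAL_APPLICATION = date(2026, 8, 2)      # Annex III high-risk obligations apply
PUBLIC_AUTHORITY_DEADLINE = date(2030, 8, 2)

def legacy_high_risk_must_comply(placed_on_market: date,
                                 significantly_changed_on: date | None,
                                 used_by_public_authority: bool,
                                 today: date) -> bool:
    """Article 111(2) sketch for pre-existing Annex III high-risk systems."""
    if placed_on_market >= GENERAL_APPLICATION:
        return True      # not a legacy system at all: full obligations apply
    if significantly_changed_on and significantly_changed_on >= GENERAL_APPLICATION:
        return True      # a significant design change ends the grandfathering
    if used_by_public_authority:
        return today >= PUBLIC_AUTHORITY_DEADLINE
    return False         # grandfathered until significantly changed

# A private-sector legacy system, unmodified, is not yet caught in 2027.
print(legacy_high_risk_must_comply(date(2025, 1, 1), None, False, date(2027, 1, 1)))  # False
```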
