Why Classification Matters

The entire regulatory burden of the EU AI Act depends on a single determination: which risk tier does your AI system fall into? A minimal-risk system faces no mandatory obligations. A high-risk system faces extensive requirements for risk management, data governance, documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. A prohibited system cannot be operated at all.

Getting the classification right is therefore the most important step in AI Act compliance. Misclassifying a high-risk system as minimal risk creates regulatory exposure. Over-classifying a minimal-risk system as high-risk creates unnecessary compliance costs. This article explains how the classification framework works and how to apply it to your own AI systems.

The Four Tiers

The AI Act establishes four risk categories, each with a different regulatory regime.

Prohibited. AI practices that are deemed to pose an unacceptable risk to fundamental rights or safety. These are banned. No compliance pathway exists — these systems simply cannot be operated in the EU.

High-risk. AI systems that pose significant risks but can be operated if they meet a comprehensive set of regulatory requirements. These systems must be registered, documented, tested, and subject to ongoing oversight.

Limited risk. AI systems that pose manageable risks, primarily transparency-related. These systems must inform users about their nature (for example, that they are interacting with a chatbot or viewing AI-generated content) but do not face the full compliance burden of high-risk systems.

Minimal risk. AI systems that pose negligible risks. No mandatory obligations apply, though voluntary codes of conduct are encouraged.

Prohibited AI Practices (Article 5)

The AI Act bans a narrow set of AI applications outright. These prohibitions have been in effect since 2 February 2025.

Subliminal manipulation. AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm.

Exploitation of vulnerabilities. AI systems that exploit vulnerabilities of a person or group due to their age, disability, or a specific social or economic situation, to materially distort their behaviour in a way that causes or is reasonably likely to cause significant harm.

Social scoring. AI systems that evaluate or classify people over time based on social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unjustified, disproportionate, or unrelated to the context in which the data was originally generated. The final Act applies this prohibition to private actors as well as public authorities.

Real-time remote biometric identification in public spaces. AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions for specific serious crimes, missing persons, and imminent terrorist threats.

Facial recognition database scraping. AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

Emotion recognition in workplaces and education. AI systems that infer emotions in the workplace or educational institutions, except where use is for medical or safety reasons.

Biometric categorisation of sensitive attributes. AI systems that categorise individuals based on biometric data to deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation, except for certain law enforcement uses with appropriate safeguards.

Individual predictive policing. AI systems that make risk assessments of individuals for the purpose of predicting whether they will commit a criminal offence, based solely on profiling or personality traits.

The prohibited practices are narrowly defined. Most businesses will not be operating these types of systems. But the prohibitions merit a screening exercise — particularly for businesses that use AI for employee monitoring, customer profiling, or behavioural analysis.

High-Risk AI Systems (Article 6 and Annex III)

The high-risk category is where most of the regulatory burden concentrates. An AI system is classified as high-risk through one of two routes.

Route 1: AI systems that are safety components of regulated products (Article 6(1)). If an AI system is intended to be used as a safety component of a product that falls under EU harmonisation legislation listed in Annex I (such as medical devices, machinery, toys, radio equipment, civil aviation, vehicles, and elevators), and the product is required to undergo a third-party conformity assessment, then the AI system is high-risk. This route captures AI embedded in physical products that are already regulated for safety.

Route 2: Standalone AI systems in Annex III use cases (Article 6(2)). Annex III lists specific use cases that are classified as high-risk. These are grouped into eight areas.

Biometrics. AI for remote biometric identification, biometric categorisation, and emotion recognition (where not prohibited).

Critical infrastructure. AI used as safety components in the management and operation of critical digital infrastructure, road traffic, and utilities (water, gas, heating, and electricity).

Education. AI for determining access to or assignment in educational institutions, evaluating learning outcomes, assessing the appropriate level of education, and monitoring prohibited behaviour during exams.

Employment. AI for recruitment and selection, decisions on terms of employment, promotion, and termination, task allocation based on personal traits, and monitoring or evaluating work performance.

Essential services. AI for assessing eligibility for public benefits and services, credit scoring, risk assessment and pricing for life and health insurance, and evaluating and classifying emergency calls.

Law enforcement. AI for risk assessments of individuals (except the prohibited form of predictive policing), polygraphs, evaluating the reliability of evidence, and profiling in the course of detecting, investigating, or prosecuting crime.

Migration. AI for risk assessments of migrants, examining applications for asylum, visas, and residence permits, and polygraphs.

Administration of justice and democratic processes. AI for assisting judicial authorities in researching and interpreting facts and law and applying the law to concrete facts, and AI intended to influence the outcome of an election or referendum.

The Exception for Low-Risk Use Within High-Risk Categories

Not every AI system that falls within an Annex III use case is automatically high-risk. Article 6(3) provides an important exception: an AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making.

Specifically, an AI system is not high-risk if it is intended to perform a narrow procedural task, is intended to improve the result of a previously completed human activity, is intended to detect decision-making patterns without replacing or influencing previously completed human assessment, or is intended to perform a preparatory task to an assessment relevant to the Annex III use cases.

This exception is important because it means that simple AI tools used in high-risk domains — for example, an AI spell-checker used in a law firm, or an AI scheduling tool used in an HR department — are not automatically classified as high-risk just because they operate in a listed domain. The classification depends on whether the AI system materially influences consequential decisions.

However, the exception does not apply if the AI system performs profiling of individuals. Any AI system that profiles people in the context of an Annex III use case is high-risk regardless of the exception.
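The structure of the Article 6(3) test can be sketched as a simple boolean check. This is an illustrative simplification only: the flag names are hypothetical, and whether any criterion is actually met is a legal judgment, not a value you can read off a spec sheet.

```python
def article_6_3_exception_applies(
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_influencing: bool,
    preparatory_task_only: bool,
    profiles_individuals: bool,
) -> bool:
    """True if an Annex III system escapes the high-risk classification."""
    # Profiling of individuals always defeats the exception.
    if profiles_individuals:
        return False
    # Otherwise, any one of the four Article 6(3) criteria suffices.
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_influencing,
        preparatory_task_only,
    ])
```

For example, an AI spell-checker used in a law firm performs a narrow procedural task and does no profiling, so the exception applies and the system is not high-risk despite operating in a listed domain.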

How to Classify Your AI Systems

A practical classification exercise involves the following steps.

First, identify each AI system in your inventory. This includes systems you have developed (as a provider) and systems you use (as a deployer), including AI functionality embedded in third-party software.

Second, screen for prohibited practices. Is any system doing something on the Article 5 list? If yes, it must be discontinued immediately.

Third, check Annex I. Is the AI system a safety component of a product covered by the EU harmonisation legislation listed in Annex I? If yes and the product requires third-party conformity assessment, the AI system is high-risk.

Fourth, check Annex III. Does the AI system fall within one of the Annex III use cases? If yes, assess whether the Article 6(3) exception applies — does the system materially influence decision-making, or is it performing a narrow procedural or preparatory task?

Fifth, assess transparency obligations. Does the AI system interact directly with people, generate synthetic content, or perform emotion recognition or biometric categorisation? If yes, it faces limited-risk transparency obligations regardless of whether it is also high-risk.

If the system does not fall into any of the above categories, it is minimal risk and faces no mandatory obligations.
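The five-step walk-through above can be sketched as a decision procedure. The class, field names, and yes/no flags below are hypothetical simplifications for illustration; real classification requires legal analysis of the system against the Act's actual text, not a lookup table.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    article5_practice: bool = False        # step 2: does anything on the Article 5 list
    annex1_safety_component: bool = False  # step 3: safety component of an Annex I product
    third_party_assessment: bool = False   # step 3: product needs third-party conformity assessment
    annex3_use_case: bool = False          # step 4: falls within an Annex III use case
    materially_influences_decisions: bool = True  # step 4: Article 6(3) assessment
    profiles_individuals: bool = False     # step 4: profiling defeats the exception
    transparency_trigger: bool = False     # step 5: chatbot, synthetic content, emotion recognition

def classify(s: AISystem) -> list[str]:
    tiers = []
    if s.article5_practice:
        return ["prohibited"]  # no compliance pathway exists
    if s.annex1_safety_component and s.third_party_assessment:
        tiers.append("high-risk")
    elif s.annex3_use_case:
        # Article 6(3): the exception is unavailable if the system profiles
        # individuals or materially influences decision outcomes.
        if s.profiles_individuals or s.materially_influences_decisions:
            tiers.append("high-risk")
    if s.transparency_trigger:
        tiers.append("limited-risk")  # applies even alongside high-risk
    return tiers or ["minimal-risk"]
```

Note that the tiers are not fully exclusive: a high-risk system that also interacts directly with people carries both the high-risk requirements and the limited-risk transparency obligations, which is why the sketch returns a list rather than a single label.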

The Obligations by Tier

High-risk systems must comply with requirements spanning eight areas.

Risk management. Establishing and maintaining a risk management system throughout the AI system’s lifecycle.

Data governance. Ensuring training, validation, and testing data are relevant, representative, and, to the best extent possible, free of errors.

Technical documentation. Maintaining detailed documentation of the system’s design, development, and capabilities.

Record-keeping. Automatic logging of events during operation.

Transparency. Providing clear instructions for use to deployers.

Human oversight. Designing the system so that humans can effectively oversee its operation.

Accuracy and robustness. Achieving appropriate levels of accuracy and resilience to errors.

Cybersecurity. Protecting against unauthorised access and manipulation.

Providers must also conduct a conformity assessment before placing the system on the market and register the system in the EU database.

Limited-risk systems must ensure users are informed that they are interacting with an AI system (for chatbots and conversational AI), that content has been artificially generated or manipulated (for deepfakes and synthetic media), and that emotion recognition or biometric categorisation is being performed (where applicable).

Minimal-risk systems face no mandatory obligations.

Common Classification Questions

Is a customer service chatbot high-risk? Generally no. A chatbot that answers customer queries is a limited-risk system (transparency obligation to disclose AI interaction) but is not high-risk unless it makes consequential decisions about essential services.

Is an AI recruitment screening tool high-risk? Almost certainly yes. Recruitment and selection is expressly listed in Annex III, and a screening tool that filters candidates materially influences hiring decisions.

Is an AI tool that summarises documents high-risk? Generally no. A summarisation tool performs a preparatory task and does not materially influence decision-making. The Article 6(3) exception would typically apply even if the tool is used in a high-risk domain.

Is a credit scoring model high-risk? Yes. Credit scoring and assessment of creditworthiness is expressly listed in Annex III.

If you need to classify your AI systems or assess your regulatory obligations, get in touch or schedule a meeting with our team.

Bart Lieben
Attorney-at-Law