
Why Employment AI Is High-Risk

The EU AI Act classifies AI systems used in the employment context as high-risk because decisions about employment — who gets hired, who gets promoted, how work is allocated, how performance is evaluated, and who gets terminated — have direct and significant consequences for people’s livelihoods, dignity, and fundamental rights.

This classification is not theoretical. AI tools are already widely used in recruitment (CV screening, candidate ranking, video interview analysis), workforce management (task allocation, scheduling, productivity monitoring), performance evaluation (automated performance scoring, behaviour pattern analysis), and employee development (personalised training recommendations, career path modelling). Each of these applications, to the extent that it materially influences employment decisions, falls within the AI Act’s high-risk category.

For employers, this means that the AI tools used in HR and recruitment are among the first to require compliance with the AI Act’s most demanding requirements — and the 2 August 2026 deadline for high-risk system obligations is approaching.

What the AI Act Covers

Annex III of the AI Act lists the following employment-related use cases as high-risk:

Recruitment and selection. AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates. This captures the full spectrum of AI-assisted hiring: from systems that write and target job postings, through tools that screen and rank applications, to platforms that assess candidates through automated interviews, psychometric testing, or skills evaluation.

Decisions affecting terms of employment. AI systems intended to be used to make decisions affecting the terms of work-related relationships, including promotion and termination. If an AI system influences decisions about who gets promoted, who gets a pay rise, or who is selected for redundancy, it is high-risk.

Task allocation. AI systems intended to be used to allocate tasks based on individual behaviour, personal traits, or characteristics. This covers AI-driven scheduling and task distribution systems that take into account personal attributes rather than purely operational factors.

Performance monitoring and evaluation. AI systems intended to be used to monitor and evaluate the performance and behaviour of persons in work-related relationships. This includes AI tools that analyse employee productivity, track behaviour patterns, evaluate performance metrics, or flag deviations from expected work patterns.

Prohibited Practices in the Workplace

In addition to the high-risk classification, certain AI practices are outright prohibited in the employment context.

Emotion recognition. The AI Act prohibits AI systems that infer emotions of natural persons in the areas of the workplace and education, except where the AI system is intended to be placed on the market or put into service for medical or safety reasons. This means that AI tools that claim to detect employee emotions through facial analysis, voice tone analysis, or behavioural patterns — whether for engagement monitoring, stress detection, or productivity assessment — are banned in the workplace unless they qualify under the narrow medical or safety exception.

This prohibition has been in effect since 2 February 2025. Employers using emotion recognition tools in the workplace should have already discontinued them.

Subliminal manipulation and exploitation of vulnerabilities. AI systems that use subliminal techniques to manipulate employee behaviour, or that exploit vulnerabilities related to age, disability, or economic circumstances, are prohibited. While these prohibitions are broadly applicable and not specific to employment, they are particularly relevant in workplace contexts where power imbalances exist between employers and employees.

Obligations for Employers as Deployers

Most employers use AI tools developed by third parties rather than building their own. This makes them deployers under the AI Act, with a specific set of obligations.

Use in accordance with instructions. Employers must use high-risk AI systems in accordance with the instructions for use provided by the provider. This means not repurposing a tool designed for one function (such as skills assessment) for a different function (such as performance monitoring) without verifying that the new use falls within the system’s intended purpose and compliance scope.

Human oversight. Employers must assign human oversight of high-risk AI systems to natural persons who have the necessary competence, training, and authority to effectively oversee the system’s operation. In the recruitment context, this means ensuring that a qualified human reviews and has the authority to override AI-driven recommendations. A process where the AI ranks candidates and a human rubber-stamps the ranking does not constitute meaningful oversight.

Input data quality. Employers must ensure that input data is relevant and sufficiently representative for the system’s intended purpose. If the AI recruitment tool was trained on data from a different industry, geography, or demographic profile, the employer needs to assess whether the system’s outputs are reliable for its specific context.

Monitoring. Employers must monitor the operation of the high-risk AI system and report to the provider or distributor if they believe the system presents a risk or if they detect anomalies in its operation.

Record-keeping. Employers must keep the logs automatically generated by the system for the period specified by the provider or required by applicable law. These logs may be needed for compliance verification, incident investigation, or responding to complaints from affected individuals.

Fundamental rights impact assessment. Certain deployers — including public bodies and private entities providing public services — must conduct a fundamental rights impact assessment before deploying high-risk AI systems. Even where this obligation does not formally apply, conducting such an assessment is good practice for any employer using AI in consequential employment decisions.

Transparency to affected persons. Employers must inform individuals that they are subject to the use of a high-risk AI system. Job candidates must be told that AI is being used in the screening process. Employees must be told that AI is being used to monitor performance, allocate tasks, or evaluate behaviour. This obligation complements the GDPR’s transparency requirements for automated decision-making.

Interaction with Employment Law and GDPR

The AI Act does not operate in isolation. AI use in employment also triggers obligations under the GDPR (processing of employee and candidate personal data, automated decision-making under Article 22, data protection impact assessments), national employment law (information and consultation obligations, anti-discrimination requirements, works council rights where applicable), and the Platform Work Directive (for platform workers, additional transparency and human review requirements for algorithmic management).

In Belgium specifically, employers must also consider the information and consultation requirements under Belgian employment law, the role of works councils and trade union delegations in relation to surveillance and monitoring technologies, and the Belgian anti-discrimination legislation that applies to AI-driven hiring decisions.

The combined effect of these frameworks is that AI in employment is one of the most heavily regulated use cases. Employers need to coordinate compliance across multiple legal regimes rather than addressing each in isolation.

Practical Steps for Employers

Inventory your HR AI tools. Identify every AI-powered tool used in recruitment, workforce management, performance evaluation, and employee development. Include tools embedded in your HRIS, ATS, and workforce management platforms that you may not think of as AI.

Screen for prohibited practices. Verify that no tool performs emotion recognition in the workplace (unless it qualifies under the medical or safety exception). This should already have been completed by the 2 February 2025 deadline.

Classify each tool. Determine which tools are high-risk under Annex III. Tools that materially influence hiring decisions, performance evaluations, promotion decisions, task allocation, or termination are almost certainly high-risk. Tools that perform purely administrative functions (scheduling meetings, managing time-off requests) without material influence on employment decisions may qualify for the Article 6(3) exception.

Assess your providers. For each high-risk tool, assess whether the provider is preparing for AI Act compliance. Request information about conformity assessment plans, technical documentation availability, and the provider’s compliance roadmap. If the provider cannot demonstrate a credible compliance plan, consider whether an alternative tool is available.

Implement human oversight. Ensure that every high-risk AI tool used in employment decisions is subject to meaningful human oversight by a person with the competence and authority to review, question, and override AI recommendations.

Update your transparency communications. Ensure that job advertisements, application forms, and employee communications disclose the use of AI where applicable. Coordinate with your privacy notices to address both AI Act and GDPR transparency requirements.

Document everything. Maintain records of your AI inventory, risk classifications, provider assessments, human oversight arrangements, and transparency measures. Documentation is essential for demonstrating compliance to regulators and for responding to complaints from affected individuals.
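For employers tracking a larger tool estate, the inventory, screening, and classification steps above can be maintained as a simple structured record. The following is a minimal illustrative sketch in Python; the field names, use-case tags, and categories are our own shorthand, not terms defined by the AI Act, and any real classification should be confirmed by legal review.

```python
from dataclasses import dataclass

# Shorthand tags for Annex III employment use cases (illustrative, not
# official AI Act terminology).
HIGH_RISK_USES = {
    "recruitment_selection",   # screening, ranking, evaluating candidates
    "terms_of_employment",     # promotion, pay, termination decisions
    "task_allocation",         # allocation based on personal traits
    "performance_monitoring",  # monitoring/evaluating performance
}

# Practices banned in the workplace since 2 February 2025.
PROHIBITED_USES = {
    "emotion_recognition",
}

@dataclass
class HRAITool:
    name: str
    vendor: str
    uses: set                                  # tags describing what the tool does
    materially_influences_decisions: bool = True

    def classification(self) -> str:
        """Rough first-pass triage of a tool against the categories above."""
        if self.uses & PROHIBITED_USES:
            return "prohibited"
        if self.uses & HIGH_RISK_USES and self.materially_influences_decisions:
            return "high-risk"
        return "review for Article 6(3) exception"

# Example: a CV-screening tool embedded in an ATS
tool = HRAITool("CV Screener", "ExampleVendor",
                uses={"recruitment_selection"})
print(tool.classification())  # → high-risk
```

A record like this also doubles as part of the documentation trail: keeping the classification rationale next to each tool makes it easier to demonstrate compliance to regulators later.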

If you need to assess your HR AI tools against the AI Act or build compliant recruitment processes, get in touch or schedule a meeting with our team.

Bart Lieben
Attorney-at-Law