
The AI Act's Specific Rules for Employment AI

AI systems used in employment, worker management, and access to self-employment are explicitly listed in Annex III of the EU AI Act as high-risk AI systems.

This means that AI tools used in recruitment, selection, promotion, performance evaluation, task allocation, and monitoring of employee behaviour are subject to the AI Act's full compliance framework for high-risk systems. Given how rapidly AI has been adopted in HR functions, from CV screening and automated interview scoring to workforce analytics and productivity monitoring, this classification has immediate and widespread implications for employers across all sectors.

The classification applies regardless of the size of the organisation or whether the AI system is commercially procured or developed in-house. A large enterprise using a third-party applicant tracking system with AI-assisted screening and a start-up using a custom-built performance monitoring tool are both operating high-risk AI systems in the employment context. Both the provider (who built the system) and the deployer (the employer using it) have compliance obligations under the Act.

What Counts as High-Risk in HR

Annex III, point 4 lists the employment and worker management category in broad terms:

- AI systems intended to be used for recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates in the course of interviews or tests;
- AI systems intended to be used to make decisions on promotion and termination of work-related contractual relationships; and
- AI systems intended to be used to allocate tasks, and to monitor or evaluate the performance and behaviour of persons in such contractual relationships.

This covers a significant portion of the modern HR technology stack. CV parsing tools with algorithmic ranking, video interview platforms that score candidates on facial expressions or speech patterns, workforce scheduling tools that allocate shifts based on productivity metrics, and employee monitoring software that tracks output or behaviour all fall within scope. The Article 6(3) carve-out for systems that do not pose a significant risk and perform only narrow procedural or preparatory tasks may apply to some simple categorisation or information retrieval tools, but relying on it requires a documented assessment and cannot simply be assumed.

Obligations for Employers as Deployers

Employers deploying high-risk AI in HR contexts are deployers under the AI Act and have a specific set of obligations. They must implement the provider's instructions for use and not use the system for purposes other than those for which it was intended. They must ensure that adequate human oversight is in place: decisions affecting individual employees must involve meaningful human review and must not be delegated entirely to the AI system. They must provide transparency to affected individuals, meaning that employees and candidates must be informed when a high-risk AI system has been used in a decision that affects them, and must be able to obtain a meaningful explanation of the decision's logic.

For employers specifically, there is an additional notification obligation under the AI Act that does not apply in other deployment contexts: workers and their representatives must be informed before a high-risk AI system is deployed in the employment context. This requirement mirrors existing transparency and consultation obligations under national labour law and works council directives, but the AI Act introduces it as a standalone compliance requirement independent of those frameworks.

Employers within the scope of the AI Act that are bodies governed by public law (government employers, public institutions, publicly funded organisations), or private entities providing public services, are subject to the Fundamental Rights Impact Assessment (FRIA) requirement for high-risk AI systems. This assessment must be conducted before the system is first deployed and must document the system's potential impact on fundamental rights, including non-discrimination, privacy, and rights of defence.

Interaction with GDPR

Employment AI almost always involves processing of personal data, and often special category data: health information used in attendance monitoring, data about trade union membership in collective bargaining contexts, or biometric data used in identity verification or access control systems. This makes GDPR compliance a concurrent and equally demanding obligation alongside the AI Act.

Article 22 GDPR is particularly relevant: it grants employees and candidates the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Recruitment decisions, performance ratings, and dismissal decisions based on AI outputs are paradigm examples. Controllers must either fall within one of the Article 22(2) exceptions (explicit consent, necessity for a contract, or authorisation by law), or ensure that the AI output is not the sole basis for the decision. The AI Act's human oversight requirement and Article 22's prohibition on solely automated decisions are complementary: in practice, both demand meaningful human involvement before consequential decisions are made.
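For teams building HR decision pipelines, this overlap can be enforced in software by refusing to finalise a consequential decision until a meaningful human review has been recorded. The following is a minimal, illustrative sketch, not a prescribed implementation; all class, field, and function names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: names below are assumptions, not terms from the AI Act or GDPR.

@dataclass
class CandidateAssessment:
    candidate_id: str
    ai_score: float           # output of the screening model
    ai_recommendation: str    # e.g. "advance" or "reject"

@dataclass
class HumanReview:
    reviewer: str
    decision: str             # the human's own decision
    rationale: str            # independent judgement, not a rubber stamp

def finalise_decision(assessment: CandidateAssessment,
                      review: Optional[HumanReview]) -> str:
    """Refuse to issue a consequential decision without a recorded human review."""
    if review is None or not review.rationale.strip():
        raise ValueError("Decision blocked: no meaningful human review recorded.")
    # The AI recommendation is advisory input only; the human decision is final.
    return review.decision
```

The point of the gate is structural: the AI output never flows directly into the outcome, so the system cannot produce a "solely automated" decision even under time pressure.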

Frequently Asked Questions

Does using an AI CV screening tool make us a high-risk AI deployer?

Almost certainly yes, unless the tool performs only a preparatory administrative function without any filtering, ranking, or scoring of candidates. An AI system that ranks CVs, filters applications against criteria, or scores candidates on any dimension is an AI system used to analyse and filter job applications within the meaning of Annex III point 4. As the employer using this system, you are a deployer of a high-risk AI system with the corresponding obligations: human oversight, transparency to candidates, worker notification, and compliance with the provider's instructions for use.

Do we need to tell candidates that AI was used in their assessment?

Yes. Deployers of high-risk AI systems must provide transparency to affected individuals. Where an AI system has been used in a recruitment or selection process that affected a candidate's outcome, the candidate must be informed. This obligation applies regardless of whether the candidate asks. It is separate from, but complementary to, the GDPR's data subject rights under Articles 13 and 14 (information about automated decision-making) and Article 22 (right not to be subject to solely automated decisions).

What does meaningful human oversight look like in practice?

Meaningful human oversight means that the person making the employment decision has genuinely reviewed the AI output, understands what the system assessed and how, and exercises independent judgement rather than rubber-stamping the AI's recommendation. It does not mean that a human simply countersigns an AI-generated shortlist without reviewing the underlying evidence. Governance frameworks should document the oversight procedure, define who conducts it, and create an audit trail showing that oversight occurred before consequential decisions were made.

Can we use AI to monitor employee productivity?

Workforce performance monitoring systems are within the high-risk AI category if they evaluate the performance or behaviour of persons in employment relationships. Using such a system requires compliance with both the AI Act (human oversight, transparency, worker notification) and the GDPR (lawful basis for monitoring, data minimisation, transparency, and limits on automated decision-making with significant effects on employees). National labour law and works council regulations will impose additional constraints in many jurisdictions. Legal assessment before deployment is essential.

Bart Lieben
Attorney-at-Law