Most AI systems process personal data. A recruitment screening tool processes candidate CVs. A credit scoring model processes financial data. A customer service chatbot processes queries that contain personal information. An AI analytics platform processes user behaviour data. When an AI system processes personal data, both the AI Act and the GDPR apply simultaneously.
The AI Act does not replace the GDPR. It sits alongside it, adding AI-specific requirements on top of the existing data protection framework. This means that a provider or deployer of an AI system that processes personal data must comply with both regulations — and must understand where they overlap, where they diverge, and where they create complementary obligations.
The good news is that compliance with one framework often supports compliance with the other. Many of the AI Act’s requirements — transparency, risk management, data quality, documentation — align with GDPR principles. The challenge is coordination: ensuring that your compliance activities address both frameworks efficiently rather than creating parallel workstreams that duplicate effort.
One of the most significant areas of overlap is impact assessment. The GDPR requires a Data Protection Impact Assessment (DPIA) under Article 35 when processing is likely to result in a high risk to the rights and freedoms of natural persons — which explicitly includes systematic and extensive profiling with legal or similarly significant effects, and processing on a large scale of special categories of data. The AI Act requires high-risk AI system providers to maintain a risk management system (Article 9) and requires certain deployers to conduct a fundamental rights impact assessment (Article 27).
In practice, these assessments cover overlapping territory. A DPIA for an AI-powered recruitment tool would assess the data protection risks of processing candidate data. An AI Act risk assessment for the same system would assess the risks to health, safety, and fundamental rights. A fundamental rights impact assessment (required under Article 27 for certain deployers of high-risk AI, including public bodies, private entities providing public services, and deployers using AI for creditworthiness assessment or for risk assessment and pricing in life and health insurance) would assess the impact on the rights of affected individuals.
Rather than conducting three separate assessments, the most efficient approach is a combined assessment that addresses all three frameworks in a single exercise. The assessment should cover the data protection dimensions (legal basis, necessity, proportionality, data subject rights, security measures — as required by the GDPR), the AI-specific dimensions (accuracy, bias, robustness, human oversight, transparency — as required by the AI Act), and the fundamental rights dimensions (impact on equality, non-discrimination, privacy, freedom of expression, and other rights — as required by the AI Act’s fundamental rights impact assessment).
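For teams that track assessment progress in tooling rather than documents, the three dimensions above can be modelled as a single checklist structure. The sketch below is purely illustrative: the field names are our own labels, not regulatory terminology, and a real assessment would capture narrative findings, not booleans.

```python
from dataclasses import dataclass

# Hypothetical checklist for a combined DPIA / risk management / FRIA exercise.
# Field names are illustrative labels, not terms from either regulation.

@dataclass
class CombinedAssessment:
    # Data protection dimensions (GDPR, Article 35 DPIA)
    legal_basis_documented: bool = False
    necessity_and_proportionality_assessed: bool = False
    data_subject_rights_addressed: bool = False
    security_measures_described: bool = False
    # AI-specific dimensions (AI Act, Article 9 risk management)
    accuracy_and_bias_evaluated: bool = False
    robustness_tested: bool = False
    human_oversight_designed: bool = False
    # Fundamental rights dimensions (AI Act, Article 27 FRIA)
    equality_impact_assessed: bool = False
    privacy_impact_assessed: bool = False

    def outstanding(self) -> list[str]:
        """Return the checklist items not yet completed."""
        return [name for name, done in vars(self).items() if not done]

assessment = CombinedAssessment(legal_basis_documented=True)
print(assessment.outstanding())  # every item except legal_basis_documented
```

A structure like this makes it easy to run one exercise and report completeness against all three frameworks at once, rather than maintaining three separate trackers.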
Article 22 of the GDPR gives individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. This right is directly relevant to AI systems that make or substantially influence decisions about people.
The AI Act’s transparency requirements complement Article 22 but operate differently. Under the AI Act, deployers of high-risk AI systems that make or assist in making decisions about natural persons must inform those persons that they are subject to the use of the system (Article 26(11)). Under the GDPR, where automated decision-making takes place, the controller must provide meaningful information about the logic involved, as well as the significance and envisaged consequences of such processing (Articles 13(2)(f), 14(2)(g), and 15(1)(h)).
The practical implication is that when you deploy an AI system that makes or influences significant decisions about people, you need to consider both sets of requirements. The GDPR requires that you inform data subjects about the automated processing and give them the right to contest the decision and obtain human intervention. The AI Act requires that the system is designed for effective human oversight, that deployers ensure meaningful human review is possible, and that affected persons are informed.
Where an AI system is a high-risk system under the AI Act and also involves automated decision-making under Article 22 GDPR, the combined obligation is clear: the system must be transparent about its role, humans must be able to meaningfully oversee and intervene, and affected individuals must be informed and empowered to contest decisions.
Both frameworks impose data quality requirements, though with different emphases.
The GDPR’s accuracy principle (Article 5(1)(d)) requires that personal data be accurate and, where necessary, kept up to date. The AI Act’s data governance requirements (Article 10) go further in the AI context, requiring that training, validation, and testing datasets be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. The AI Act also requires that data governance take into account the specific geographical, contextual, behavioural, or functional setting within which the AI system is intended to be used.
For AI systems that process personal data, these requirements interact. The GDPR governs how personal data is collected, stored, and used — including for AI training. The AI Act governs the quality standards that training data must meet. The data minimisation principle under the GDPR (collect only what is necessary) can create tension with the AI Act’s requirement for representative and complete datasets. Navigating this tension requires careful attention to the legal basis for processing, the scope of data collection, and the techniques used to ensure data quality without collecting more personal data than necessary.
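One way to ease this tension is to derive only the aggregate statistics needed to judge representativeness, so that record-level personal data need not be retained beyond the quality check. The sketch below illustrates the idea under assumed field names and thresholds; it is not a prescribed technique and does not substitute for a legal analysis of the processing.

```python
from collections import Counter

# Sketch: judge whether a training set is representative of the intended
# deployment context using aggregate proportions only. The "region" field
# and the target mix are assumptions made for this illustration.

def category_proportions(records: list[dict], key: str) -> dict[str, float]:
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

def max_shortfall(sample: dict, target: dict) -> float:
    # Largest under-representation of any expected category.
    return max(target[c] - sample.get(c, 0.0) for c in target)

training_records = [{"region": "north"}] * 70 + [{"region": "south"}] * 30
target_mix = {"north": 0.5, "south": 0.5}  # expected deployment context

proportions = category_proportions(training_records, "region")
shortfall = max_shortfall(proportions, target_mix)
print(round(shortfall, 2))  # "south" is under-represented by 0.2
```

Once the proportions are computed, the record-level data used for the check can be deleted, keeping the quality evidence without enlarging the pool of retained personal data.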
A particularly nuanced area of overlap concerns special category data under the GDPR (Article 9) — data revealing racial or ethnic origin, political opinions, religious beliefs, health data, and similar sensitive categories. Processing this data is generally prohibited under the GDPR unless a specific exception applies.
The AI Act, however, recognises that testing AI systems for bias may require processing special category data. Article 10(5) provides that providers of high-risk AI systems may process special categories of personal data to the extent that it is strictly necessary for the purpose of ensuring bias detection and correction. This is subject to appropriate safeguards, including technical limitations on re-use, security measures, and data minimisation.
This provision creates a carefully bounded exception that allows bias testing while maintaining data protection safeguards. Providers must ensure that any processing of special category data for bias testing complies with both the AI Act’s requirements and the GDPR’s safeguards, and must document the necessity and proportionality of the processing.
Both frameworks require transparency, but the obligations operate at different levels.
The GDPR requires transparency about data processing: what data is collected, for what purpose, on what legal basis, and what rights the data subject has. When automated decision-making is involved, the GDPR additionally requires meaningful information about the logic involved.
The AI Act requires transparency about the AI system itself: deployers must inform affected persons that they are subject to a high-risk AI system, providers must supply deployers with comprehensive instructions for use, and limited-risk systems (chatbots, deepfake generators) must disclose their AI nature to users.
In practice, both sets of transparency obligations should be addressed in your privacy notices, terms of use, and user-facing communications. A privacy notice for a service that uses AI-powered decision-making should explain both the data processing (GDPR) and the AI system’s role (AI Act).
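As a drafting aid, the contents of such a combined notice can be organised as a single structure that keeps the GDPR and AI Act elements side by side. Everything below is illustrative: the keys, the example controller, and the wording are assumptions for the sketch, not prescribed or legally reviewed text.

```python
# Illustrative structure for a combined transparency notice.
# All names and wording are assumptions made for this sketch.
combined_notice = {
    "gdpr": {
        "controller": "Example Ltd",
        "purposes": ["recruitment screening"],
        "legal_basis": "legitimate interests (Art. 6(1)(f))",
        "automated_decision_making": {
            "logic": "plain-language description of how the model scores candidates",
            "consequences": "scores influence whether a candidate is shortlisted",
            "rights": ["human intervention", "express point of view", "contest the decision"],
        },
    },
    "ai_act": {
        "high_risk_system_disclosure": "You are subject to the use of a high-risk AI system.",
        "role_of_system": "assists recruiters; final decisions are made by a human",
        "human_oversight": "recruiters review every AI-generated shortlist",
    },
}

def render(notice: dict) -> str:
    # Flatten the structure into a single user-facing text block.
    lines = [f"Controller: {notice['gdpr']['controller']}"]
    lines.append(notice["ai_act"]["high_risk_system_disclosure"])
    lines.append("How decisions are made: "
                 + notice["gdpr"]["automated_decision_making"]["logic"])
    return "\n".join(lines)

print(render(combined_notice))
```

Keeping both frameworks in one structure makes it harder for the GDPR notice and the AI Act disclosure to drift apart as the system changes.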
The GDPR gives individuals a suite of rights — access, rectification, erasure, restriction, portability, and objection — that apply to personal data processed by AI systems. The right to erasure, in particular, raises complex questions for AI: if an individual’s data was used to train a model, does erasure require retraining the model without that data?
The AI Act does not directly address individual rights of this nature — it defers to the GDPR on data subject rights. But the AI Act’s requirements for record-keeping and logging may facilitate the exercise of GDPR rights by creating audit trails that make it easier to identify what data was processed and how it influenced outcomes.
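The audit-trail point can be made concrete with a small sketch. The design below is hypothetical (the pseudonymisation scheme, field names, and salt handling are all assumptions): each automated decision is logged under a pseudonymous subject key, so a deployer can locate every record behind a GDPR access or erasure request without storing the raw identifier in the log itself.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical decision-audit log. A salted hash serves as a pseudonymous
# subject key; real salt management and retention policy are out of scope.

def subject_key(identifier: str, salt: str = "rotate-me") -> str:
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:16]

def log_decision(log: list, identifier: str, inputs: dict, outcome: str) -> None:
    log.append({
        "subject": subject_key(identifier),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_used": sorted(inputs),  # which data categories influenced the outcome
        "outcome": outcome,
    })

def records_for(log: list, identifier: str) -> list:
    # Supports an access request: find every logged decision about this person.
    key = subject_key(identifier)
    return [entry for entry in log if entry["subject"] == key]

audit_log: list = []
log_decision(audit_log, "candidate-42",
             {"cv_text": "...", "test_score": 87}, "shortlisted")
print(json.dumps(records_for(audit_log, "candidate-42"), indent=2))
```

The same index that answers an access request also scopes an erasure request: the entries returned by `records_for` identify exactly which records, and which input categories, relate to the individual.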
The most effective compliance strategy treats the AI Act and the GDPR as complementary rather than competing frameworks. Practical steps include: appointing a single team or individual responsible for coordinating AI Act and GDPR compliance for AI systems; conducting combined impact assessments that address both data protection and AI-specific risks; maintaining unified documentation that covers both technical documentation (AI Act) and records of processing activities (GDPR); aligning transparency communications so that a single user-facing notice addresses both frameworks; and coordinating incident response procedures so that both AI Act serious incident reporting and GDPR data breach notification obligations are met.
If you need to coordinate your AI Act and GDPR compliance or conduct a combined impact assessment, get in touch or schedule a meeting with our team.
