A conformity assessment is the process through which a provider of a high-risk AI system demonstrates that the system meets all the requirements set out in the AI Act before it can be placed on the market or put into service. It is the gateway between development and deployment — no high-risk AI system can legally be offered in the EU without a completed conformity assessment.
The concept is borrowed from EU product safety law, where conformity assessments have been used for decades to verify that products meet essential requirements before they can bear the CE marking. The AI Act extends this approach to AI systems classified as high-risk.
The conformity assessment results in a declaration of conformity — a formal statement by the provider that the system meets all applicable requirements — and entitles the provider to affix the CE marking. The system must also be registered in the EU database for high-risk AI systems before deployment.
The AI Act provides two routes to conformity assessment, depending on the type of high-risk AI system.
Self-assessment (internal control). For most high-risk AI systems listed in Annex III — the standalone use cases in areas like employment, credit scoring, education, and essential services — the provider can conduct the conformity assessment internally. This means the provider evaluates its own system against the regulatory requirements, documents the assessment, and issues the declaration of conformity without involving an external body.
Self-assessment does not mean rubber-stamping. The provider must genuinely verify compliance with every applicable requirement — risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity — and maintain evidence that the assessment was thorough and accurate. If a market surveillance authority later determines that the self-assessment was deficient, the provider faces penalties.
Third-party assessment (notified body). For high-risk AI systems used for biometric identification of natural persons (specifically, remote biometric identification), the provider must involve an independent notified body unless it has applied harmonised standards or common specifications covering all the relevant requirements in full, in which case internal control remains available. Additionally, for high-risk AI systems that are safety components of regulated products under Annex I (medical devices, machinery, vehicles, etc.), the conformity assessment follows the procedures established under the relevant sector-specific legislation, which typically involves a notified body.
A notified body is an independent organisation that has been designated by a member state to carry out conformity assessments. These bodies must meet specific competence and impartiality requirements and are supervised by their designating authority.
Whether conducted internally or by a notified body, the conformity assessment involves verifying compliance with each of the AI Act’s requirements for high-risk systems.
Risk management system (Article 9). The provider must demonstrate that it has established and maintained a risk management system that operates throughout the AI system’s lifecycle. This includes identification and analysis of known and foreseeable risks, estimation and evaluation of risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse, and adoption of appropriate risk management measures. The assessment verifies that risks have been identified, evaluated, and addressed.
Data governance (Article 10). For systems that are trained on data, the provider must demonstrate that training, validation, and testing datasets meet specific quality criteria: they must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. The assessment verifies that data governance practices are in place and documented.
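Completeness and error checks are the most mechanisable part of this. The sketch below, using plain Python on rows represented as dictionaries, shows one way a provider might generate a basic quality report for its documentation; it is illustrative only, since relevance and representativeness need domain-specific statistical analysis that no generic check can supply.

```python
def dataset_quality_report(rows: list[dict], required_fields: list[str]) -> dict:
    """Basic completeness and duplication checks on a training dataset.

    Illustrative only: real Article 10 governance also covers relevance
    and representativeness, which require domain-specific analysis.
    """
    total = len(rows)
    missing = sum(
        1 for row in rows
        if any(row.get(f) in (None, "") for f in required_fields)
    )
    # Count exact duplicate rows by comparing their sorted field/value pairs.
    duplicates = total - len({tuple(sorted(r.items())) for r in rows})
    return {
        "rows": total,
        "rows_with_missing_fields": missing,
        "duplicate_rows": duplicates,
        "completeness": round(1 - missing / total, 3) if total else 0.0,
    }
```

Running such a report at each dataset revision, and keeping the outputs, produces exactly the kind of evidence trail the assessment looks for.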
Technical documentation (Article 11). Comprehensive documentation must be prepared before the system is placed on the market. The documentation must include a general description of the system, detailed information about its development (design specifications, architecture, algorithms, training processes), information about data governance, a description of the risk management system, and information about the system’s performance, including accuracy metrics. The assessment verifies that the documentation exists, is complete, and accurately describes the system.
Record-keeping (Article 12). The system must be designed to automatically log events during operation. The assessment verifies that logging capabilities are implemented and that the logs capture the information needed for post-market monitoring and incident investigation.
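What "automatically log events" can look like in practice is sketched below: each decision is written as a structured, timestamped record. The record fields, the logger name, and the choice to log a reference rather than raw input data are our own illustrative assumptions, not requirements spelled out in Article 12.

```python
import json
import logging
import sys
from datetime import datetime, timezone

# Illustrative audit logger: every inference becomes a structured,
# timestamped record suitable for post-market monitoring and
# incident investigation.
logger = logging.getLogger("ai_system_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

def log_decision(system_id: str, input_ref: str, output: str,
                 confidence: float) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))
    return record
```

The design choice worth noting is logging an input reference rather than the input itself, which keeps the audit trail useful for investigation without turning the logs into a second store of personal data.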
Transparency and instructions for use (Article 13). The provider must prepare clear, comprehensive instructions for deployers. The assessment verifies that these instructions exist and cover the system’s intended purpose, capabilities, limitations, known risks, and the human oversight measures that deployers must implement.
Human oversight (Article 14). The system must be designed to allow effective human oversight. The assessment verifies that appropriate interfaces and safeguards are in place so that humans can understand the system’s outputs, intervene when necessary, and override or reverse AI-driven decisions.
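One common design pattern for this is confidence-based routing with an explicit override path, sketched minimally below. The threshold value and the status labels are illustrative assumptions; the point is that low-confidence outputs reach a human before taking effect, and that a human decision can replace the system's output and is recorded as such.

```python
def decide_with_oversight(predict, x, confidence_of, threshold: float = 0.8) -> dict:
    """Route low-confidence outputs to a human reviewer instead of
    acting automatically. Threshold and routing are illustrative."""
    output = predict(x)
    conf = confidence_of(x)
    status = "automatic" if conf >= threshold else "pending_human_review"
    return {"output": output, "status": status, "confidence": conf}

def human_override(decision: dict, reviewer: str, final_output) -> dict:
    """Record a human reviewer's final decision, preserving the fact
    that the AI-driven output was overridden."""
    return {**decision, "output": final_output,
            "status": "human_overridden", "reviewer": reviewer}
```

An assessment would look for exactly these two properties: an interface through which a human can understand and intercept the output, and a mechanism by which the human decision, not the model's, is what takes effect.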
Accuracy, robustness, and cybersecurity (Article 15). The system must achieve appropriate levels of accuracy, be resilient to errors and inconsistencies, and be protected against unauthorised access and manipulation. The assessment verifies that these properties have been tested and documented.
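Accuracy and robustness testing can be demonstrated with evaluation harnesses along the lines of the sketch below: a plain accuracy metric plus a crude robustness probe that jitters numeric inputs and re-checks the prediction. The noise model and trial counts are illustrative assumptions; real robustness testing would use perturbations matched to the system's actual failure modes.

```python
import random

def accuracy(predict, inputs, labels) -> float:
    """Fraction of inputs classified correctly."""
    return sum(predict(x) == y for x, y in zip(inputs, labels)) / len(inputs)

def robustness_under_noise(predict, inputs, labels,
                           noise: float = 0.1, trials: int = 100,
                           seed: int = 0) -> float:
    """Accuracy when each numeric feature is jittered uniformly in
    [-noise, +noise]; a crude probe for sensitivity to small errors."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        i = rng.randrange(len(inputs))
        perturbed = [v + rng.uniform(-noise, noise) for v in inputs[i]]
        correct += predict(perturbed) == labels[i]
    return correct / trials
```

Documenting the metric definitions alongside the results matters as much as the numbers: the assessment checks that the declared accuracy levels were actually measured, and how.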
Upon successful completion of the conformity assessment, the provider draws up an EU declaration of conformity. This is a formal document — specified in Annex V of the AI Act — that includes the provider’s name and contact details, a statement that the declaration is issued under the sole responsibility of the provider, the AI system’s identification (name, type, version), a statement that the system complies with the AI Act, references to the harmonised standards or common specifications used, the place and date of issue, and the signature of the authorised person.
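Because the declaration's required contents are enumerated, some providers model it as a structured record so completeness can be checked before signature. The sketch below mirrors the fields listed above; the class itself and its completeness check are our own illustrative construction, not a format prescribed by Annex V.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class EUDeclarationOfConformity:
    """Fields mirroring the Annex V contents summarised above;
    the structure itself is illustrative."""
    provider_name: str
    provider_contact: str
    system_name: str
    system_type: str
    system_version: str
    harmonised_standards: tuple  # references to standards applied, if any
    place_of_issue: str
    date_of_issue: str
    signatory: str
    sole_responsibility: bool = True
    complies_with_ai_act: bool = True

def is_complete(decl: EUDeclarationOfConformity) -> bool:
    """Every string field must be filled in before the declaration is signed."""
    return all(v != "" for v in asdict(decl).values() if isinstance(v, str))
```

A frozen dataclass is a deliberate choice here: once drawn up and signed, the declaration is a fixed document, and any change to the system should produce a new one rather than mutate the old.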
The declaration must be kept for ten years after the system has been placed on the market or put into service. It must be made available to national competent authorities upon request.
After drawing up the declaration of conformity, the provider affixes the CE marking to the AI system or its documentation. The CE marking signals to deployers and authorities that the system has been assessed and meets the applicable requirements.
The provider must also register the system in the EU database for high-risk AI systems before placing it on the market or putting it into service. The registration includes the provider’s details, the system description, its intended purpose, its conformity assessment status, and its risk classification. The database is publicly accessible, enhancing transparency about which high-risk AI systems are available in the EU market.
Conformity assessment is not a one-time event. The provider must establish a post-market monitoring system that actively and systematically collects, documents, and analyses data on the AI system’s performance throughout its lifecycle. If the system’s performance degrades, if new risks emerge, or if serious incidents occur, the provider must take corrective action — and in some cases must conduct a new conformity assessment to reflect material changes to the system.
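A minimal version of "actively and systematically collects and analyses performance data" is a rolling-window monitor that flags degradation against the declared baseline, sketched below. The window size, tolerance, and the accuracy-only view of performance are illustrative assumptions; a real post-market monitoring system would track multiple indicators and feed corrective-action processes.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window monitor: flags degradation when recent accuracy
    drops below the declared baseline by more than a tolerance.
    All thresholds here are illustrative, not taken from the AI Act."""

    def __init__(self, baseline: float, tolerance: float = 0.05,
                 window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        """Record whether one live decision turned out to be correct."""
        self.outcomes.append(correct)

    def degraded(self) -> bool:
        """True once a full window of outcomes falls below baseline - tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data yet
        recent = sum(self.outcomes) / len(self.outcomes)
        return recent < self.baseline - self.tolerance
```

A `degraded()` signal is what would trigger the corrective actions described above, and a log of when it fired is itself evidence that monitoring was in place.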
Serious incidents must be reported to the relevant market surveillance authority. A serious incident is any incident or malfunctioning that directly or indirectly leads (or is likely to lead) to the death of a person or serious harm to a person's health, serious harm to property or the environment, serious and irreversible disruption of critical infrastructure, or the infringement of obligations under Union law intended to protect fundamental rights.
If you are a provider of a high-risk AI system, the most productive preparation steps are ensuring that technical documentation is comprehensive, current, and accurately reflects the system as deployed; verifying that a risk management system is in place and has been maintained throughout development; confirming that data governance practices are documented and that training data quality has been assessed; testing the system for accuracy, robustness, and cybersecurity and documenting the results; verifying that human oversight mechanisms are implemented and effective; preparing instructions for use that meet the AI Act’s requirements for clarity and completeness; and familiarising yourself with the EU database registration process.
The conformity assessment itself can take weeks to months depending on the complexity of the system and the completeness of existing documentation. Starting this process well before the August 2026 deadline is advisable.
If you need guidance on preparing for a conformity assessment or understanding your obligations as a provider, get in touch or schedule a meeting with our team.
