The EU AI Act organises its compliance obligations around two central roles: provider and deployer. The distinction is not academic; it determines which obligations apply to your organisation for each AI system in your operations.
Providers bear the heaviest burden: they are responsible for the AI system's design, technical documentation, conformity assessment, and registration before the system reaches the market. Deployers operate systems provided by others but carry their own obligations regarding use, monitoring, and transparency to affected individuals. Getting the classification wrong (either underestimating your obligations as a provider or failing to recognise your deployer responsibilities) is a compliance failure with regulatory consequences.
Most organisations operate AI systems in both capacities simultaneously. A financial services firm that builds a proprietary credit-scoring model is a provider of that system; when it also uses a third-party AI tool for document processing, it is a deployer of that system. The obligations that apply to the firm differ between these two systems, and the internal compliance framework must reflect both sets of requirements rather than applying a single model across the board.
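By way of illustration only, this dual-capacity point can be captured in a per-system inventory: obligations attach system by system, so each entry records which role (or roles) the organisation holds for that system. The structure and field names below are a hypothetical sketch, not a prescribed format.

```python
# Hypothetical per-system compliance inventory: roles (and therefore obligations)
# are recorded per AI system, not once for the whole organisation.
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str
    high_risk: bool
    roles: set[str] = field(default_factory=set)  # subset of {"provider", "deployer"}


inventory = [
    # Proprietary credit-scoring model, built and used in-house.
    AISystemRecord("credit-scoring-model", high_risk=True, roles={"provider", "deployer"}),
    # Third-party document-processing tool, used under the firm's own authority.
    AISystemRecord("document-processing-tool", high_risk=False, roles={"deployer"}),
]

for record in inventory:
    tag = " (high-risk)" if record.high_risk else ""
    print(f"{record.name}: {', '.join(sorted(record.roles))}{tag}")
```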
A provider is defined under Article 3(3) of the AI Act as a natural or legal person, public authority, agency, or other body that develops an AI system or general-purpose AI (GPAI) model, or has it developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
Three elements of this definition deserve attention. First, the provider need not have built the system itself; an organisation that has an AI system developed on its behalf (by a technology contractor, an in-house team, or through customisation of an open-source model) and then places it on the market or puts it into service under its own name is a provider, not merely a commissioning party. Second, the provider role attaches at the point of placing on the market or putting into service, and putting into service includes supplying a system for the provider's own use; organisations that develop AI purely for internal use (without offering it on any external market) are therefore providers with the same obligations for high-risk systems. Third, the definition covers both commercial products and free tools: providing an AI system at no charge does not reduce the provider obligations.
A deployer is defined under Article 3(4) as a natural or legal person, public authority, agency, or other body that uses an AI system under its own authority, except where the AI system is used in the course of a personal non-professional activity. The deployer uses the system but does not place it on the market; they receive it from a provider and put it to work in their own operations or for their own customers.
Deployers of high-risk AI systems have their own set of obligations under the Act. These include implementing the provider's instructions for use, conducting a fundamental rights impact assessment before deploying a high-risk system in certain contexts, ensuring appropriate human oversight, suspending use if a risk is identified, informing the relevant market surveillance authority of serious incidents, and providing transparency to individuals affected by decisions made with the system's assistance. For employers, there is an additional requirement: workers and workers' representatives must be informed when high-risk AI systems are deployed in employment contexts.
It is common for a single organisation to be both provider and deployer of the same system. An organisation that develops a high-risk AI system for internal use (a custom model for HR decisions, a proprietary fraud detection system, a risk scoring tool) is simultaneously a provider (it developed the system) and a deployer (it uses it under its own authority). In this case, all provider obligations and all deployer obligations apply to the same organisation for the same system. There is no derogation for internal use.
Organisations also become providers when they substantially modify a high-risk AI system obtained from a third-party provider. A substantial modification (defined in Article 3(23) as a change not foreseen in the provider's initial conformity assessment that affects the system's compliance with the Act's requirements or alters its intended purpose) converts the modifying organisation, under Article 25, into a provider with full provider obligations for the modified system, even if the underlying system was compliant in the hands of the original provider.
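The classification tests described so far can be summarised in a rough decision sketch. This is an illustrative simplification of Articles 3(3), 3(4), and 25, not a complete legal test (it ignores importers, distributors, rebranding, and GPAI-specific rules), and the function and parameter names are invented for the example.

```python
# Simplified role determination for a single AI system, following the tests
# described in the text. Not a substitute for case-by-case legal analysis.
def roles_for_system(developed_or_commissioned: bool,
                     placed_or_put_into_service_under_own_name: bool,
                     substantially_modified_high_risk_system: bool,
                     used_under_own_authority: bool) -> set[str]:
    roles: set[str] = set()
    # Article 3(3): develops (or has developed) an AI system and places it on
    # the market or puts it into service under its own name -> provider.
    if developed_or_commissioned and placed_or_put_into_service_under_own_name:
        roles.add("provider")
    # Article 25: a substantial modification of a high-risk system makes the
    # modifier the provider of the modified system.
    if substantially_modified_high_risk_system:
        roles.add("provider")
    # Article 3(4): using the system under your own authority (outside personal,
    # non-professional activity) -> deployer.
    if used_under_own_authority:
        roles.add("deployer")
    return roles


# The internal-use case from above: the developer and user of the same system
# holds both roles, with both sets of obligations.
assert roles_for_system(True, True, False, True) == {"provider", "deployer"}
```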
Providers of high-risk AI systems must establish a risk management system, ensure training data meets quality criteria, maintain technical documentation covering the system's design, capabilities, and limitations, design systems to enable logging of operations, ensure transparency and provide instructions for use to deployers, design for human oversight, ensure accuracy, robustness, and cybersecurity, undergo a conformity assessment before market placement, register the system in the EU AI database, affix CE marking, and establish a post-market monitoring plan. For systems assessed by a notified body, the third-party assessment must be completed before placement on the market.
Deployers of high-risk AI systems must implement the provider's instructions for use, assign human oversight to persons with the necessary competence, training, and authority (and ensure appropriate AI literacy among staff who operate the system), conduct a fundamental rights impact assessment where required (for deployers that are bodies governed by public law or private entities providing public services, and for deployers using systems for credit scoring or for risk assessment and pricing in life and health insurance), inform affected workers and their representatives when AI systems are deployed in employment contexts, monitor the system's performance, suspend use if a risk is identified, and report serious incidents to the market surveillance authority.
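Where the same organisation holds both roles for a system, the two obligation sets above apply cumulatively. A minimal sketch of how they might be tracked side by side follows; the short labels are paraphrases of the obligations listed in the two preceding paragraphs, not the Act's wording, and a real compliance register would map each item to the relevant article and supporting evidence.

```python
# Paraphrased obligation checklists for the two roles, combined per system.
OBLIGATIONS = {
    "provider": [
        "risk management system",
        "training data quality and data governance",
        "technical documentation",
        "logging capability",
        "transparency and instructions for use to deployers",
        "human oversight by design",
        "accuracy, robustness, cybersecurity",
        "conformity assessment before market placement",
        "registration in the EU AI database",
        "CE marking",
        "post-market monitoring plan",
    ],
    "deployer": [
        "follow the provider's instructions for use",
        "competent human oversight and AI literacy",
        "fundamental rights impact assessment (where applicable)",
        "inform workers and their representatives (employment contexts)",
        "monitor the system's performance",
        "suspend use if a risk is identified",
        "report serious incidents to the market surveillance authority",
    ],
}


def checklist(roles: set[str]) -> list[str]:
    """Combined obligations for an organisation holding the given roles for one system."""
    return [item for role in sorted(roles) for item in OBLIGATIONS[role]]


# An internally developed high-risk system attracts both lists in full.
print(len(checklist({"provider", "deployer"})))  # 18 items in this sketch
```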
Calling a third-party AI API does not by itself make a company a provider, but it can. A company that calls an AI API and presents the results directly to end users without modification is typically a deployer of the underlying model, not a provider. However, if the company builds a product or service on top of the API (wrapping it in a user interface, adding additional logic, or integrating it into a decision-making workflow), it may become a provider of that downstream system, with obligations that apply alongside (not instead of) those of the underlying model provider. The analysis turns on whether the company's downstream product constitutes an AI system in its own right under the Act's definition.
Placing a high-risk AI system on the market or putting it into service without completing the required conformity assessment is a violation of Article 16 of the AI Act and is subject to fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher. Market surveillance authorities can require the system to be withdrawn from the market. Where personal data was processed as part of the non-compliant deployment, there may be concurrent GDPR violations, attracting separate supervisory action from the competent data protection authority.
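To make the "whichever is higher" mechanic concrete, a short worked example follows; the turnover figures are invented for illustration.

```python
# Penalty cap described above: EUR 15 million or 3 % of global annual turnover,
# whichever is higher. Turnover figures are hypothetical.
FIXED_CAP_EUR = 15_000_000
TURNOVER_SHARE = 0.03


def maximum_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)


print(f"{maximum_fine_eur(400_000_000):,.0f}")  # 15,000,000 -> fixed cap applies
print(f"{maximum_fine_eur(900_000_000):,.0f}")  # 27,000,000 -> turnover-based cap applies
```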
