The EU AI Act assigns different obligations to different actors in the AI value chain. The two most important roles for businesses are provider and deployer. The provider bears the primary regulatory burden — designing the system to be compliant, conducting conformity assessments, and maintaining documentation. The deployer bears a secondary but still significant set of obligations — using the system as intended, maintaining human oversight, and monitoring operation.
Understanding which role (or roles) your business occupies is essential for determining what you are required to do. The obligations are complementary but distinct: a provider cannot discharge its responsibilities by delegating them to the deployer, and a deployer cannot assume that compliance is entirely the provider’s problem.
Under Article 3(3), a provider is a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed on its behalf, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
The key elements are development (or commissioning development) and placing on the market or putting into service under your own name. If you build an AI tool and offer it to clients, you are a provider. If you commission a software house to build an AI tool that you then sell under your brand, you are still the provider — the entity that places it on the market under its name bears the provider obligations.
Importantly, if you develop an AI system purely for internal use and put it into service within your organisation, you are also a provider. The AI Act does not distinguish between AI systems sold commercially and those deployed internally — both constitute putting into service.
Under Article 3(4), a deployer is a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
If you use an AI tool in your business operations — whether it is a third-party SaaS product, an API integration, or a licensed software solution — you are a deployer. The breadth of this definition means that most businesses that use any AI-powered tools are deployers, even if they do not think of themselves as being in the AI industry.
Many businesses occupy both roles simultaneously. A technology company that develops an AI product for clients (provider) while also using third-party AI tools for internal HR screening, customer analytics, or operations management (deployer) faces obligations in both capacities. The provider obligations apply to the systems it develops. The deployer obligations apply to the systems it uses.
This dual role is increasingly common. Businesses that integrate AI APIs into their own products may be deployers of the underlying model but providers of the combined system that they offer to their customers. The classification depends on who places the final product on the market under their name.
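The classification logic described above can be reduced to a rough decision sketch. This is a simplified illustration only: the field names (`develops_system`, `own_name_on_market`, `uses_system_professionally`) are our own shorthand for the Article 3(3) and 3(4) elements, not statutory terms, and real role analysis turns on legal facts a toy model cannot capture.

```python
# Illustrative sketch of AI Act role classification.
# Field names are this author's shorthand, not terms defined in the Act.
from dataclasses import dataclass

@dataclass
class Activity:
    develops_system: bool             # develops (or commissions) an AI system
    own_name_on_market: bool          # places it on the market / into service under own name
    uses_system_professionally: bool  # uses an AI system under its authority, non-personal use

def roles(a: Activity) -> set[str]:
    r = set()
    # Article 3(3): development plus placing on the market under your own name
    if a.develops_system and a.own_name_on_market:
        r.add("provider")
    # Article 3(4): professional use under your authority
    if a.uses_system_professionally:
        r.add("deployer")
    return r

# A company that sells its own AI product and also uses third-party AI tools
# internally occupies both roles at once.
print(roles(Activity(True, True, True)))
```

The point the sketch makes is that the two tests are independent: satisfying one does not exclude the other, which is why dual classification is so common.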
Providers of high-risk AI systems bear the most extensive obligations under the AI Act. They must establish and maintain a risk management system that operates throughout the system's lifecycle; ensure data governance for training, validation, and testing datasets; prepare and maintain comprehensive technical documentation; design the system with automatic event logging capabilities; and provide deployers with clear, comprehensive instructions for use. They must also design the system to enable effective human oversight and ensure appropriate levels of accuracy, robustness, and cybersecurity. Before placing the system on the market, the provider must conduct a conformity assessment, affix the CE marking to indicate conformity, and register the system in the EU database. After placement, the provider must operate a post-market monitoring system and report serious incidents to the relevant authorities.
The provider must also have a quality management system in place that covers the full lifecycle of the AI system, from design through deployment and post-market monitoring.
Deployers of high-risk AI systems have their own set of obligations, distinct from but complementary to the provider's. They must use the system in accordance with the instructions for use provided by the provider; assign human oversight to natural persons who have the necessary competence, training, and authority; monitor the operation of the system and inform the provider or distributor if they believe it presents a risk; and keep the logs automatically generated by the system for the period specified by the provider or required by law. Certain deployers must also conduct a fundamental rights impact assessment before putting a high-risk system into use: this applies to bodies governed by public law, private entities providing public services, and deployers of certain AI systems such as credit scoring and insurance risk assessment tools. Finally, deployers must inform natural persons that they are subject to a high-risk AI system and cooperate with competent authorities.
Deployers are also responsible for ensuring that input data is relevant and sufficiently representative for the system’s intended purpose. This means that a deployer cannot simply feed any data into a high-risk AI system without considering whether that data is appropriate.
Article 25 establishes circumstances in which a deployer effectively becomes a provider and assumes provider obligations. This happens when the deployer places its own name or trademark on a high-risk AI system already on the market (rebranding), makes a substantial modification to a high-risk AI system, or modifies the intended purpose of an AI system in a way that makes it high-risk.
This provision is important for businesses that customise, fine-tune, or significantly adapt AI systems obtained from third parties. If your modifications are substantial enough to change the system’s risk profile or intended purpose, you become the provider of the modified system and assume full provider obligations — including conformity assessment and registration.
The boundary between routine configuration (which does not trigger provider status) and substantial modification (which does) is one of the more nuanced aspects of the AI Act. The Commission is expected to provide further guidance on this distinction, but in the meantime, businesses that significantly modify AI systems should assess whether they have crossed the line into provider territory.
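As a rough mental model, the Article 25 triggers can be expressed as a simple disjunction: any one of them is enough to shift provider obligations onto the deployer. The parameter names below are our own labels for the three triggers described above, not statutory terms, and whether a given modification is "substantial" is itself a legal judgment the code cannot make for you.

```python
# Illustrative check of the Article 25 triggers under which a deployer
# (or other third party) assumes provider obligations.
# Parameter names are informal labels, not terms from the Act.
def assumes_provider_obligations(rebrands: bool,
                                 substantial_modification: bool,
                                 new_purpose_makes_high_risk: bool) -> bool:
    # Any single trigger is sufficient; they are alternatives, not
    # cumulative conditions.
    return rebrands or substantial_modification or new_purpose_makes_high_risk

# Example: heavy fine-tuning of a third-party high-risk system that
# changes its behaviour may amount to a substantial modification.
print(assumes_provider_obligations(False, True, False))
```

The practical takeaway is that the triggers operate independently: rebranding alone is enough, even without any technical change to the system.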
The AI Act recognises that AI systems are often delivered through multi-party supply chains. A GPAI model might be developed by one company, integrated into an application by a second company, and deployed by a third. Each party in the chain has its own obligations.
The provider of the GPAI model must comply with the Chapter V transparency and documentation requirements. The downstream provider — the company that integrates the model into a high-risk AI system — bears the full set of Chapter III provider obligations for the resulting system. The deployer bears the deployer obligations.
Contractual arrangements between parties in the supply chain should clearly allocate responsibilities, specify information flows, and establish cooperation mechanisms. The AI Act requires the GPAI provider to give downstream providers sufficient information to comply with their own obligations. Deployers, in turn, need sufficient information from their providers to fulfil their monitoring, logging, and transparency obligations.
The AI Act also imposes obligations on importers (entities that place AI systems from third countries on the EU market) and distributors (entities that make AI systems available on the market without affecting their properties). Importers must verify that the provider has conducted the conformity assessment, that the system bears the CE marking, and that the required documentation accompanies the system. Distributors must verify that the system bears the CE marking and is accompanied by the required documentation and instructions for use.
These roles matter primarily for hardware products with embedded AI and for businesses that act as resellers or distribution channels for AI systems developed outside the EU.
The provider-deployer distinction has direct implications for how AI procurement contracts should be structured. If you are a deployer procuring a high-risk AI system, your contract with the provider should address: the provider's representations regarding conformity assessment and CE marking; access to technical documentation and instructions for use; the provider's post-market monitoring obligations and how incidents will be communicated; data governance responsibilities (who ensures training data quality, who controls input data); access to system logs; allocation of liability for AI Act non-compliance; cooperation obligations in the event of a regulatory investigation; and provisions for updates, modifications, and version management.
If you are a provider selling to deployers, your terms of service or licence agreements should clearly define the intended purpose and conditions of use, specify what the deployer must do to maintain compliance, and limit the scope of use to what has been assessed in the conformity assessment.
If you need to assess your role under the AI Act or structure your AI procurement contracts, get in touch or schedule a meeting with our team.
