Any organisation that provides AI services to customers, or procures AI services from vendors, needs contractual provisions that address AI-specific risks, obligations, and liability allocation.
Standard software licensing clauses and generic data processing agreements were not designed for AI systems. They do not reflect the AI Act's division of responsibilities between providers and deployers, do not allocate responsibility for conformity assessments and technical documentation, do not address post-market monitoring obligations, and do not govern the use of training data or the intellectual property status of AI-generated outputs.
The EU AI Act creates explicit contractual obligations between providers and deployers of high-risk AI systems. Article 25 of the AI Act provides that providers may contractually agree to have deployers perform certain provider obligations on their behalf, and sets out a framework for the contractual allocation of responsibilities along the AI value chain. Even where an organisation's AI systems are not high-risk, contracts that do not address AI-specific issues such as data use, output ownership, liability for AI errors, and model changes leave significant risks unallocated.
When procuring AI services, an organisation should ensure that the vendor contract addresses the following AI-specific points. First, AI Act compliance: where the AI system is or may be high-risk, the contract should confirm whether the provider has completed the required conformity assessment, registered the system in the EU AI database, and will provide access to technical documentation and instructions for use. The contract should require the vendor to notify the customer if the system is substantially modified in a way that affects its compliance status or risk profile.
Second, data use and training: the contract should address whether customer data will be used to train, fine-tune, or improve the AI model. Use of customer data for training without explicit authorisation is both a contractual issue and a GDPR compliance issue. Where customer data includes personal data, the GDPR's controller-processor framework applies, and a Data Processing Agreement must be in place. The contract should specify clearly whether the vendor may use input data, output data, or usage data for any purpose other than providing the contracted service.
Third, output ownership: the contract should address who owns IP rights in AI-generated outputs. This is particularly important where the AI system generates content (text, code, designs, analyses) that the customer intends to commercialise or publish. Standard software contracts do not address AI-generated output ownership, and the position under EU copyright law is that fully AI-generated works, without sufficient human creative contribution, are not protected by copyright. Contractual provisions should address this gap.
Fourth, liability and indemnities: the contract should allocate liability for AI errors and hallucinations, particularly where the AI system is used to make or support consequential decisions. Standard limitation of liability clauses drafted for software services may not adequately address AI-specific risks, including errors that are statistically rare but systematically biased, outputs that are plausible but factually wrong, or decisions that are later found to be discriminatory.
Organisations that provide AI services to customers, whether as a core product or as an embedded feature of a broader service, need to address their own AI Act obligations in the contract. Where the AI system is high-risk, the provider must ensure that the deployer receives the instructions for use required by the Act, and should document this delivery in the contract. The contract should define the permitted uses of the AI system and make clear that use outside the defined scope is not covered by the provider's conformity assessment.
Providers should also address the contractual position on substantial modification: if the customer customises or modifies the AI system to the point of substantial modification under the Act's definition, the customer becomes a provider with its own compliance obligations. The contract should define the boundary between permitted configuration and substantial modification, and should allocate the compliance consequences accordingly.
A standard GDPR Data Processing Agreement probably does not cover AI services fully. Standard DPAs address the legal basis for processing, security measures, data subject rights, and breach notification, but typically do not address AI-specific issues such as the use of personal data for model training or improvement, logging and audit trail requirements under the AI Act, the provider-deployer responsibility allocation, or the data governance requirements for training data quality. Existing DPAs should therefore be reviewed and updated to address these gaps when the processing involves AI systems.
AI indemnity clauses should address at minimum: infringement of third-party IP rights in the AI system's training data or outputs; regulatory fines resulting from the provider's AI Act compliance failures; and losses resulting from AI outputs that contain factual errors, discriminatory outcomes, or intellectual property violations. The precise scope and allocation of indemnities will depend on the commercial relationship and the nature of the AI system, but parties should not assume that standard software indemnity provisions cover these risks.
Under EU copyright law, copyright protects works that are the result of the author's own intellectual creation. Fully AI-generated outputs without sufficient human creative input do not attract copyright protection and are in the public domain. Where a human provides sufficient creative direction and editorial contribution to an AI-assisted output, the human author may own the copyright in the result. Contracts should address ownership of AI-generated outputs explicitly, rather than relying on implied terms or analogies to software licensing, because the legal position is genuinely uncertain in many jurisdictions and the commercial stakes are significant.
Contracts should identify, for each AI system covered, whether the vendor is the AI Act provider and the customer is the deployer, or whether responsibilities are shared or allocated differently. They should confirm that the provider has completed any required conformity assessment and will maintain the system's compliance status. They should obligate the provider to notify the customer of substantial modifications, compliance incidents, or regulatory investigations. And they should address the contractual mechanism by which the provider delegates any provider obligations to the deployer, as permitted under Article 25 of the AI Act, with appropriate governance requirements.
