Why Standard IT Contracts Are Not Enough

When you procure an AI system — whether as a SaaS subscription, an API integration, a licensed software product, or a bespoke development — the contract governing that procurement needs to address issues that traditional IT agreements were not designed for. Standard software licensing agreements and SaaS terms typically address availability, performance metrics, data processing, and intellectual property. They rarely address the specific regulatory obligations that the AI Act imposes on providers and deployers, the unique risks that AI systems create (bias, accuracy degradation, unexplainable outputs), or the allocation of responsibilities between parties in the AI value chain.

As the AI Act’s provisions take effect — particularly the high-risk system requirements from August 2026 — every AI procurement contract should include provisions that address the regulatory framework. This article outlines the key clauses to consider.

Compliance Warranties and Representations

The most fundamental AI-specific provision is the provider’s warranty regarding AI Act compliance. If the AI system is classified as high-risk, the provider should warrant that:

- it has conducted the required conformity assessment;
- the system bears the CE marking;
- the system is registered in the EU database for high-risk AI systems;
- it has prepared and maintains the technical documentation required by Article 11;
- the system is designed to meet the requirements for risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity; and
- it will maintain a post-market monitoring system as required.

For GPAI models, the warranty should cover compliance with the Chapter V transparency and documentation requirements, including the training data summary and copyright compliance policy.

For all AI systems, regardless of risk classification, the provider should represent that the system complies with applicable AI Act provisions (including the prohibited practices rules and transparency obligations) and that it will continue to comply as further provisions take effect.

These warranties should be specific rather than generic. A blanket statement that the provider complies with all applicable laws is less useful than specific representations about conformity assessment status, risk classification, and documentation availability.

Information and Documentation Access

The AI Act requires providers to supply deployers with comprehensive instructions for use (Article 13) and requires deployers to use the system in accordance with those instructions (Article 26). The contract should ensure that:

- the provider delivers instructions for use that meet the AI Act’s requirements;
- technical documentation is available to the deployer to the extent needed for compliance;
- the provider discloses the system’s intended purpose, capabilities, limitations, and known risks;
- the provider notifies the deployer of any material changes to the system that affect its risk profile or compliance status; and
- the deployer has access to the information needed to fulfil its own AI Act obligations (monitoring, logging, fundamental rights impact assessment, transparency to affected persons).

Data Governance and Training Data

If the AI system processes personal data, the contract must include a data processing agreement (DPA) that addresses the GDPR requirements. Beyond the standard DPA provisions, AI-specific data clauses should address:

- who is responsible for the quality and representativeness of training data;
- whether the provider uses the deployer’s data for training or fine-tuning (and if so, under what conditions and limitations);
- how input data provided by the deployer is handled (stored, processed, used for model improvement, or deleted); and
- whether the provider’s training data was lawfully obtained and compliant with the text-and-data-mining (TDM) opt-out obligations under the DSM Directive.

Intellectual Property

AI-specific IP provisions should address several questions that traditional IP clauses do not cover. Who owns the intellectual property in outputs generated by the AI system when used by the deployer? If the provider uses the deployer’s data to train or improve the model, does the deployer retain rights in that data and any improvements derived from it? Does the provider offer an IP indemnity covering claims that the AI system’s output infringes third-party intellectual property rights? Is the provider’s training data IP-compliant — has the provider obtained necessary licences or relied on valid exceptions (such as the TDM exception)?

IP indemnification for AI-generated output is a particularly important negotiation point. Some providers offer broad indemnification for IP infringement claims related to the system’s output. Others exclude output-related claims entirely or cap their exposure. The deployer’s risk exposure depends on how the AI-generated output is used — publishing AI-generated content, using AI-generated code in products, or relying on AI-generated designs all create different risk profiles.

Liability Allocation

AI systems can cause harm in ways that traditional software typically does not: biased decisions, inaccurate outputs relied upon for consequential decisions, unexplainable recommendations that cannot be justified to regulators or affected individuals. The contract should allocate liability for these risks clearly.

Key provisions include:

- liability for AI Act non-compliance (who bears the cost of fines, remediation, and enforcement actions?);
- liability for system failures (inaccuracy, bias, security vulnerabilities);
- indemnification for claims by affected individuals (data subjects, job applicants, consumers);
- limitation of liability provisions calibrated to the specific risks of the AI system (standard limitation clauses may not adequately address AI-specific risks); and
- insurance requirements (does the provider carry adequate liability insurance for AI-related claims?).

The AI Act’s allocation of primary obligations to the provider and secondary obligations to the deployer should be reflected in the contractual liability framework. But the contract can allocate risk between the parties more granularly than the regulation does — for example, the provider might assume greater liability for system defects, while the deployer assumes liability for using the system outside its intended purpose.

Human Oversight and Monitoring

For high-risk AI systems, the AI Act requires deployers to implement human oversight and monitor the system’s operation. The contract should support these obligations by ensuring that:

- the provider designs the system to enable effective human oversight;
- the provider makes available the information needed for monitoring (performance metrics, drift indicators, incident alerts);
- the provider grants the deployer access to system logs as required by Article 12; and
- the provider cooperates with the deployer’s monitoring activities and responds promptly to reports of issues.

Audit Rights

Given the complexity of AI systems and the difficulty of verifying compliance from the outside, deployers should consider including audit rights that allow them (or their agents) to verify the provider’s compliance with the AI Act and with the contractual commitments. Audit provisions should specify:

- the scope of the audit (technical documentation, conformity assessment records, data governance practices);
- the frequency and notice requirements;
- access to relevant personnel and systems;
- confidentiality protections; and
- the consequences of findings of non-compliance.

Change Management and Version Control

AI systems are not static. Models are updated, retrained, and fine-tuned. The contract should address:

- how changes to the AI system are communicated to the deployer;
- whether material changes require the deployer’s consent or merely notification;
- whether a substantial modification triggers a new conformity assessment (as required by the AI Act); and
- the deployer’s right to decline updates that change the system’s risk profile or intended purpose.

Exit and Data Portability

When the contractual relationship ends, the deployer needs to be able to transition to an alternative system or bring the capability in-house. AI-specific exit provisions should address:

- the return or deletion of the deployer’s data (including any data used for fine-tuning);
- access to model configurations or parameters that the deployer needs for continuity;
- transition assistance to ensure the deployer can meet its ongoing AI Act obligations during the migration period; and
- data portability in formats that allow the deployer to recreate or replicate the functionality with another provider.

Practical Negotiation Points

In practice, negotiating AI clauses often comes down to several key tensions. Providers prefer to limit their warranties and representations to what they can control; deployers need assurances that cover their regulatory exposure. Providers prefer to limit liability for AI output; deployers need protection against claims arising from reliance on that output. Providers resist broad audit rights; deployers need verification mechanisms to fulfil their own compliance obligations.

The right balance depends on the specific system, the risk classification, and the relative bargaining position of the parties. But the starting point should always be a clear understanding of how the AI Act allocates obligations between provider and deployer — and a contract that reflects and supplements that allocation.

If you need to review or negotiate AI procurement contracts, get in touch or schedule a meeting with our team.

Bart Lieben
Attorney-at-Law