Artificial intelligence is not a single legal phenomenon. The term covers a diverse and rapidly evolving set of technologies, from narrow, rule-based systems that have been in commercial use for decades to large-scale generative models that produce content often indistinguishable from human output. The legal questions raised by AI differ depending on the technology involved, the context in which it is deployed, and the legal frameworks applicable to that deployment. What distinguishes the current moment is that regulators in the EU and beyond have moved from observation to action: AI is now the subject of binding legislation, not merely guidance or soft law recommendations.
For businesses that develop, procure, or deploy AI systems, the legal landscape has changed fundamentally since 2024. The EU AI Act imposes mandatory compliance requirements on providers and deployers of AI systems, with the most demanding obligations reserved for high-risk AI and significant penalties for non-compliance. The GDPR applies wherever AI processes personal data, which in commercial contexts is most of the time. Copyright law determines who owns AI-generated content and whether training AI models on third-party works requires a licence. Employment law governs AI's use in HR decisions. Contract law must now address AI-specific risks that standard software terms do not contemplate. No single legal framework provides a complete answer: AI requires a multi-disciplinary legal analysis that draws on IP, data protection, employment, contract, and regulatory law simultaneously.
The EU AI Act (Regulation (EU) 2024/1689), in force since August 2024, is the foundational regulatory framework for AI in the EU. Its approach is risk-based: AI systems are classified by the risk they pose, and compliance obligations are proportionate to that risk. At one extreme, a small number of AI practices are flatly prohibited, including subliminal manipulation, social scoring, and real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions). At the other extreme, the majority of AI applications carry no tier-specific obligations, although systems that interact with humans or generate synthetic content attract transparency duties under Article 50. In between, high-risk AI systems, meaning those used in employment, education, access to essential services, law enforcement, and the other areas listed in Annex III, are subject to a demanding pre-market conformity assessment regime, technical documentation requirements, and post-deployment monitoring obligations.
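To make the tiered structure concrete, the sketch below models the Act's classification logic as a simple decision procedure in Python. It is an illustrative simplification, not a compliance tool: the tier names, the `SystemProfile` flags, and the helper structure are our own shorthand, and in practice each flag is itself a legal conclusion (high-risk systems, for instance, can also attract Article 50 transparency duties).

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()    # Art. 5 practices, banned outright
    HIGH_RISK = auto()     # Annex III (and Annex I) systems: full conformity regime
    TRANSPARENCY = auto()  # Art. 50 disclosure duties (chatbots, synthetic content)
    MINIMAL = auto()       # no tier-specific obligations

@dataclass
class SystemProfile:
    """Hypothetical flags; each is really a legal conclusion, not a boolean."""
    prohibited_practice: bool        # e.g. social scoring, subliminal manipulation
    annex_iii_use_case: bool         # e.g. recruitment screening
    carve_out_applies: bool          # Art. 6(3): narrow procedural/preparatory task
    human_facing_or_synthetic: bool  # chatbot interaction or generated content

def classify(p: SystemProfile) -> RiskTier:
    # Decision order mirrors the Act's structure: prohibition first,
    # then high-risk, then transparency, else minimal risk.
    if p.prohibited_practice:
        return RiskTier.PROHIBITED
    if p.annex_iii_use_case and not p.carve_out_applies:
        return RiskTier.HIGH_RISK
    if p.human_facing_or_synthetic:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# A recruitment screening tool with no applicable carve-out:
print(classify(SystemProfile(False, True, False, True)))  # RiskTier.HIGH_RISK
```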
General-purpose AI models, the large language models and foundation models that power an increasing proportion of commercial AI applications, are subject to a distinct set of obligations that took effect in August 2025. These include a summary of the content used for training, a copyright compliance policy, and, for models above a compute threshold that triggers a presumption of systemic risk, adversarial testing and serious-incident reporting requirements.
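The one bright-line element in this regime is quantitative: under Article 51(2), a model is presumed to present systemic risk when its cumulative training compute exceeds 10^25 floating-point operations (the Commission can also designate models on other criteria). A minimal sketch, with illustrative names of our own choosing:

```python
# Art. 51(2) AI Act: cumulative training compute above 1e25 FLOPs triggers
# a presumption of systemic risk. Function and variable names are our own.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """True if the compute-based presumption of systemic risk applies."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3e25))  # True: adversarial testing, incident reporting
print(presumed_systemic_risk(5e23))  # False: baseline GPAI obligations still apply
```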
AI generates novel IP questions at both ends of the AI value chain. On the input side, training AI models on large datasets of text, images, and other content raises copyright questions: does training on third-party copyright-protected works require a licence, or does it fall within the text and data mining exception? In the EU, Article 4 of the CDSM Directive provides a TDM exception for commercial uses, subject to a rights holder opt-out: where rights have been reserved in a machine-readable manner, the exception falls away and the use must be licensed. On the output side, EU copyright law's originality requirement, that a work reflect the author's own intellectual creation, presupposes human authorship, meaning that purely AI-generated outputs are not protected by copyright and fall into the public domain.
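The opt-out is increasingly expressed in machine-readable form. One emerging convention is the W3C community-developed TDM Reservation Protocol (TDMRep), which signals a reservation via, among other channels, a `tdm-reservation` HTTP response header. The sketch below assumes that convention and is illustrative only; a real ingestion pipeline would also need to check robots.txt, site terms, and TDMRep's other signalling mechanisms.

```python
# Illustrative only: checks one TDMRep signalling channel (the HTTP response
# header). Absence of the header does not establish that rights are unreserved.
import urllib.request

def tdm_rights_reserved(url: str) -> bool:
    """Return True if the server signals a TDM reservation for this resource."""
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        return response.headers.get("tdm-reservation") == "1"
```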
The AI Act addresses the input side specifically for GPAI models: providers must comply with copyright law, respect opt-outs, and publish a sufficiently detailed summary of the content used for training. This regulatory requirement sits alongside the civil copyright claims that rights holders may bring against AI developers, making training data compliance a source of dual exposure, regulatory and civil, for GPAI model providers.
AI systems that process personal data — which includes most commercial AI applications, from customer service bots to recruitment screening tools to fraud detection systems — are subject to the GDPR in full. The GDPR's requirements for lawful basis, data minimisation, purpose limitation, transparency, and data subject rights apply to AI-driven processing in the same way as to any other form of personal data processing. Article 22 GDPR, which provides data subjects with the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, is particularly relevant for AI systems that make or support consequential decisions affecting individuals.
The EU AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This is a broad definition that captures machine learning models, neural networks, and certain rule-based systems that exhibit adaptiveness. Traditional software that executes fixed rules without learning or adaptation is not an AI system under the Act's definition. Where there is doubt, legal assessment of the specific system against the Act's definition is advisable before concluding that the AI Act does not apply.
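Decomposed into its elements, the definition reads as a conjunction, with adaptiveness deliberately optional. The checklist below is our own shorthand for the Article 3(1) elements, not language from the Act:

```python
from dataclasses import dataclass

@dataclass
class DefinitionCheck:
    """Elements drawn from Art. 3(1) AI Act; field names are our own shorthand."""
    machine_based: bool
    some_autonomy: bool              # "varying levels of autonomy"
    infers_outputs_from_input: bool  # the key line between AI and fixed rules
    influences_environment: bool     # predictions, content, recommendations, decisions

def is_ai_system(c: DefinitionCheck) -> bool:
    # Adaptiveness after deployment is a "may", not a requirement, so it is
    # deliberately absent from the conjunction below.
    return (c.machine_based and c.some_autonomy
            and c.infers_outputs_from_input and c.influences_environment)

# A static rule engine: executes fixed logic, infers nothing from its input.
print(is_ai_system(DefinitionCheck(True, False, False, True)))  # False
```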
The AI Act applies to providers and deployers regardless of size, with two qualifications: the penalty regime provides for proportionate fines for SMEs, and the systemic-risk tier of GPAI obligations is triggered by compute thresholds that only the largest model developers reach (the baseline GPAI obligations apply irrespective of scale). However, the substantive compliance obligations for high-risk AI systems (conformity assessment, technical documentation, registration, human oversight) apply to an SME provider just as they do to a large enterprise. SMEs should not assume that size creates a regulatory safe harbour for substantive obligations.
Uncertainty about risk classification is common, and the AI Act's carve-outs for Annex III systems that do not pose a significant risk of harm, for example because they perform only a narrow procedural or preparatory task, require a documented assessment rather than an assumption. The appropriate response is a formal risk classification analysis, recorded in writing, that considers the system's intended purpose, the sectors in which it operates, whether it influences decisions with legal or similarly significant effects on individuals, and whether the carve-out conditions are met. This analysis should be conducted by or with legal counsel familiar with the AI Act's classification framework, and the conclusion and reasoning should be retained as part of the organisation's AI governance records.
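As an illustration of what such a record might capture, the hypothetical structure below maps the elements named above to fields; the AI Act prescribes no format, only that the assessment be documented:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ClassificationRecord:
    """Hypothetical governance record for an Art. 6(3) classification analysis."""
    system_name: str
    intended_purpose: str
    annex_iii_area: Optional[str]  # e.g. "employment", or None if outside Annex III
    carve_out_invoked: bool
    carve_out_reasoning: str
    conclusion: str                # e.g. "high-risk" or "not high-risk"
    assessed_by: str               # counsel or function responsible
    assessment_date: date = field(default_factory=date.today)

record = ClassificationRecord(
    system_name="CV screening assistant",
    intended_purpose="Rank inbound applications for recruiter review",
    annex_iii_area="employment",
    carve_out_invoked=False,
    carve_out_reasoning="Materially influences hiring outcomes; Art. 6(3) not met",
    conclusion="high-risk",
    assessed_by="External counsel, reviewed in-house",
)
```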