What AI Governance Means

AI governance is the system of policies, processes, roles, and controls through which an organisation manages its development and use of artificial intelligence. It answers the fundamental management questions: who decides which AI systems we use, how do we assess the risks, who is accountable when something goes wrong, and how do we ensure ongoing compliance?

The EU AI Act makes AI governance a regulatory requirement for providers and deployers of high-risk AI systems. But even for organisations that do not operate high-risk systems, a governance framework is good practice — it reduces risk, improves decision-making about AI adoption, and positions the organisation to respond efficiently as the regulatory landscape continues to develop.

AI governance does not require building a new bureaucracy. For most businesses, it means extending existing governance structures — risk management, compliance, procurement, and IT governance — to account for the specific characteristics and risks of AI.

The Components of AI Governance

An effective AI governance framework typically includes several interconnected components.

AI policy. A clear statement of how the organisation approaches AI — when it is appropriate to use AI, what types of AI are permitted, what approvals are required, what the organisation’s risk appetite is, and what ethical principles guide AI deployment. The policy should be practical and specific enough to guide day-to-day decisions, not a generic commitment to responsible AI that nobody reads.

AI inventory. A comprehensive register of all AI systems the organisation develops and uses, including the system’s purpose, its provider (if external), its risk classification under the AI Act, the data it processes, who uses it, and who is responsible for it. The inventory is the foundation of governance — you cannot manage what you have not identified.

Risk assessment process. A defined methodology for evaluating the risks posed by each AI system, including its risk classification under the AI Act, its potential impact on individuals and the organisation, the quality and representativeness of its training data, its accuracy and reliability, and any bias or fairness concerns. The risk assessment should be conducted before deployment and reviewed periodically.

Accountability structure. Clear assignment of roles and responsibilities for AI governance. This typically includes an executive sponsor or AI lead who has overall accountability, functional owners for each AI system (the person responsible for its operation and compliance), a review or oversight function (which may sit within compliance, legal, risk management, or a dedicated AI committee), and IT and data governance teams who manage the technical and data aspects.

Approval process. A defined process for approving the procurement, development, or deployment of new AI systems. The process should include a risk assessment, a review against the AI policy, verification of regulatory compliance (particularly AI Act classification), an assessment of the provider’s compliance posture (for externally sourced systems), and sign-off by the appropriate level of authority based on the risk assessment.

Monitoring and review. Mechanisms for ongoing monitoring of AI systems in operation, including performance monitoring (accuracy, reliability, drift), incident detection and reporting, periodic re-assessment of risk classification, user feedback collection, and review against evolving regulatory requirements.

Starting with the AI Inventory

The AI inventory is the single most important first step. Without it, every subsequent governance activity operates in the dark.

Building an AI inventory requires a systematic survey of the organisation. The challenge is that AI is often embedded in tools that people do not think of as AI. A CRM system that scores leads, a recruitment platform that screens CVs, a customer service tool that routes queries, a document review system that extracts clauses, an analytics platform that forecasts demand — all of these may use AI or machine learning under the hood.

The survey should ask each department what software tools they use, whether those tools incorporate AI or machine learning functionality, what data the tools process, what decisions or recommendations the tools influence, and who has authority over the tool’s configuration and use. The IT department can supplement this with a technical audit of software licences, API integrations, and cloud services.

For each AI system identified, the inventory should record the system name and description, the provider (internal or external), the intended purpose and use case, the risk classification under the AI Act, the data inputs and outputs, the users and affected persons, the functional owner, and the date of last assessment.

Risk Classification in Practice

Once the inventory is complete, each AI system needs to be classified according to the AI Act’s risk framework. This involves screening for prohibited practices (Article 5), checking against the high-risk categories in Annex I (safety components of regulated products) and Annex III (standalone high-risk use cases), assessing whether the Article 6(3) exception applies (narrow procedural tasks that do not materially influence decisions), identifying transparency obligations (limited-risk systems), and confirming minimal-risk status for systems that do not fall into any of the above categories.
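The screening sequence above can be sketched as a decision function. Each boolean flag stands in for a documented legal assessment that must be made by a person, so this is an illustration of the order of the checks, not a substitute for the analysis:

```python
def classify(system: dict) -> str:
    """Apply the AI Act screening steps in order (simplified sketch).

    Each flag below is a placeholder for a documented legal assessment.
    """
    if system.get("prohibited_practice"):       # Article 5 screen
        return "prohibited"
    if system.get("annex_i_safety_component") or system.get("annex_iii_use_case"):
        # Article 6(3): narrow procedural tasks that do not materially
        # influence decisions may escape high-risk status.
        if system.get("article_6_3_exception"):
            return "not high-risk (document the Article 6(3) reasoning)"
        return "high-risk"
    if system.get("transparency_obligation"):   # e.g. chatbots, AI-generated content
        return "limited-risk"
    return "minimal-risk"

print(classify({"annex_iii_use_case": True}))       # -> high-risk
print(classify({"transparency_obligation": True}))  # -> limited-risk
print(classify({}))                                 # -> minimal-risk
```

Note that the order matters: the prohibited-practice screen comes first, and minimal risk is the residual category reached only after the others have been ruled out.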

The classification should be documented with reasoning — not just the conclusion (minimal risk) but the analysis that supports it. If the classification is later challenged by a regulator, the documentation demonstrates that the organisation conducted a considered assessment rather than simply assuming the lowest risk category.

Integrating with Existing Structures

AI governance should not exist in a silo. Most organisations already have governance frameworks for data protection (GDPR compliance), information security (ISO 27001, NIS2), quality management, procurement, and general compliance. AI governance should integrate with these existing structures rather than creating a parallel system.

The GDPR connection is particularly important. Many AI systems process personal data, which means that AI deployment often triggers both AI Act and GDPR obligations simultaneously. The data protection impact assessment (DPIA) under GDPR Article 35 and the risk management requirements under the AI Act cover overlapping territory. Coordinating these assessments — or conducting a combined AI and data protection impact assessment — avoids duplication and ensures that both regulatory frameworks are addressed.

Similarly, if the organisation already has an information security management system, the AI Act’s cybersecurity requirements for high-risk systems can be integrated into the existing security governance framework rather than managed separately.

The Approval Gateway

The most effective governance intervention point is the approval gateway — the process through which new AI systems are evaluated before they are deployed. If this gateway is well-designed, it catches most governance issues before they become problems.

An effective approval process for a new AI deployment should include an initial screening to determine the AI Act risk classification and a risk assessment covering technical, legal, ethical, and operational dimensions. For externally sourced systems, it should review the provider's compliance posture: has the provider conducted a conformity assessment, what documentation is available, and what commitments does the provider make regarding accuracy, bias, and ongoing monitoring? It should also include a data protection assessment if personal data is involved, verification that the intended use aligns with the organisation's AI policy, identification of the functional owner and oversight arrangements, and sign-off at the appropriate level of authority.

The level of scrutiny should be proportionate to the risk. A minimal-risk AI tool (a grammar checker, a scheduling assistant) can go through a streamlined process. A high-risk deployment (an AI system that influences hiring decisions, credit assessments, or access to services) needs thorough assessment.
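The proportionality principle can be made operational by tying the required checks to the risk classification. The sketch below assumes hypothetical check names; the actual checklist would come from the organisation's AI policy:

```python
# Checks every new AI tool must clear, regardless of risk level.
BASE_CHECKS = ["risk_classification", "policy_alignment", "owner_assigned"]

# Additional checks for high-risk deployments (illustrative names).
HIGH_RISK_CHECKS = BASE_CHECKS + [
    "full_risk_assessment",
    "provider_conformity_review",
    "dpia_if_personal_data",
    "executive_sign_off",
]

def required_checks(risk_class: str) -> list[str]:
    """Return the approval checks required for a given risk class."""
    return HIGH_RISK_CHECKS if risk_class == "high-risk" else BASE_CHECKS

def approve(risk_class: str, completed: set[str]) -> bool:
    """Approve deployment only when every required check is complete."""
    return all(check in completed for check in required_checks(risk_class))

# A grammar checker clears the streamlined gateway; a hiring tool with the
# same three checks completed does not.
print(approve("minimal-risk", set(BASE_CHECKS)))  # True
print(approve("high-risk", set(BASE_CHECKS)))     # False
```

The design choice worth noting is that the gateway is a single function of two inputs, classification and completed checks, so a tool cannot reach deployment with a lighter process simply because nobody decided which process applied.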

Governance for SMEs

The framework described above may seem extensive for a smaller organisation. It does not need to be. The AI Act recognises that SMEs and startups should not be disproportionately burdened, and governance should be scaled to the organisation’s size and AI footprint.

For a small business that uses a handful of AI tools, effective governance might be as simple as maintaining a spreadsheet-based AI inventory, designating a single person as the AI governance lead (often the same person responsible for data protection), conducting a basic risk classification of each tool, reviewing AI providers’ terms and compliance documentation before procurement, and providing basic AI literacy training to staff.

The point is not bureaucratic perfection. It is that someone in the organisation knows what AI systems are in use, has assessed whether they are compliant, and is responsible for keeping that picture current.

If you need help building an AI governance framework or conducting an AI inventory and risk assessment, get in touch or schedule a meeting with our team.

Bart Lieben
Attorney-at-Law