
AI Governance Under the EU AI Act: A Practical Framework

The EU AI Act is now in effect. Here's how to classify your AI systems by risk level, conduct conformity assessments, and build a governance program that satisfies regulators.

Siddharth Rao · March 13, 2026 · 14 min read

The EU AI Act: A New Era for AI Regulation

The EU Artificial Intelligence Act entered into force in August 2024, making it the world's first comprehensive legal framework specifically governing artificial intelligence systems. Its obligations apply in stages: the prohibitions and the general-purpose AI rules are already applicable, most remaining provisions apply from August 2026, and some high-risk categories follow in 2027. The Act's risk-based approach to AI regulation will shape how organisations develop, deploy, and govern AI for decades to come.

Unlike sector-specific AI rules that existed previously, the EU AI Act applies horizontally across all industries and use cases. It affects not just AI developers but also organisations that deploy AI systems developed by others. For privacy and compliance teams, the Act introduces a new category of obligations that intersects significantly with existing data protection requirements under GDPR.

The Four-Tier Risk Classification System

The EU AI Act's central innovation is its risk-based classification system, which determines the compliance obligations applicable to each AI system. The four tiers — Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk — are not arbitrary categories but reflect the potential for AI systems to cause harm to individuals, society, or fundamental rights.

Unacceptable Risk systems are prohibited outright. These include AI systems that exploit vulnerabilities of specific groups, systems that use subliminal techniques to materially distort behaviour, real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), and social scoring systems, a prohibition that covers both public and private actors in the final text. Any organisation deploying a system that might fall into this category should assess it immediately: the prohibitions were among the first provisions of the Act to apply.
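
As a rough illustration of how the tiers can be represented in internal tooling, the sketch below models them as a simple Python enumeration mapped to a shorthand description of each tier's headline consequence. The tier names follow the Act, but the summaries are paraphrases for illustration, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # conformity assessment plus ongoing obligations
    LIMITED = "limited"            # transparency duties (e.g. disclosing AI interaction)
    MINIMAL = "minimal"            # no mandatory obligations; voluntary codes of conduct

# Shorthand summary of the headline consequence per tier (illustrative, not legal text).
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Deployment is prohibited; decommission or redesign.",
    RiskTier.HIGH: "Conformity assessment, risk management, logging, human oversight.",
    RiskTier.LIMITED: "Transparency duties such as disclosing AI interaction.",
    RiskTier.MINIMAL: "No specific obligations; follow internal AI policy.",
}

if __name__ == "__main__":
    for tier, obligation in TIER_OBLIGATIONS.items():
        print(f"{tier.value:>13}: {obligation}")
```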

High-Risk AI: Obligations and Conformity Assessment

High-Risk AI systems — defined primarily by their deployment context in Annex III of the Act — include AI used in critical infrastructure, biometric identification, employment and recruitment, credit scoring, access to essential services such as healthcare and education, law enforcement, and the administration of justice. AI that forms a safety component of products already covered by EU harmonisation legislation can also be High Risk. These systems may be deployed, but only once a conformity assessment has been completed and the ongoing compliance obligations are being met.

Conformity assessment for most High-Risk systems is a self-assessment: the provider conducts the assessment against the requirements of the Act and declares conformity. For certain biometric identification systems, assessment by a notified body may be required instead. The substance of the assessment covers risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.
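
Those eight areas lend themselves to a simple self-assessment checklist. The following Python sketch is hypothetical: the ConformityChecklist class, its field names, and the idea of attaching evidence references are illustrative choices for internal tooling, not terminology from the Act.

```python
from dataclasses import dataclass, field

# The substantive areas a High-Risk conformity assessment must cover,
# as listed in this article (wording is shorthand, not the Act's text).
ASSESSMENT_AREAS = [
    "risk management",
    "data governance",
    "technical documentation",
    "transparency",
    "human oversight",
    "accuracy",
    "robustness",
    "cybersecurity",
]

@dataclass
class ConformityChecklist:
    """Tracks evidence gathered for each assessment area of one AI system."""
    system_name: str
    evidence: dict[str, list[str]] = field(
        default_factory=lambda: {area: [] for area in ASSESSMENT_AREAS}
    )

    def add_evidence(self, area: str, reference: str) -> None:
        if area not in self.evidence:
            raise ValueError(f"Unknown assessment area: {area}")
        self.evidence[area].append(reference)

    def gaps(self) -> list[str]:
        """Areas with no recorded evidence, i.e. blockers for declaring conformity."""
        return [area for area, refs in self.evidence.items() if not refs]

# Example usage
checklist = ConformityChecklist(system_name="cv-screening-model")
checklist.add_evidence("data governance", "DG-2026-014: training data provenance review")
print(checklist.gaps())  # every area except 'data governance' is still open
```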

Building a Risk Management System

Article 9 of the EU AI Act requires providers of High-Risk AI systems to establish, implement, document, and maintain a Risk Management System throughout the AI system's lifecycle. This is not a one-time exercise but a continuous process that includes identification and analysis of known and foreseeable risks, estimation and evaluation of risks that may emerge in intended use and reasonably foreseeable misuse, and adoption of risk management measures.

In practice, an EU AI Act Risk Management System should be integrated with existing enterprise risk management and privacy risk frameworks. Many of the inputs — data quality assessments, impact assessments on individuals, monitoring of model performance — are shared with DPIA processes under GDPR. Organisations with mature privacy programmes have a head start.
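
To make the continuous nature of Article 9 concrete, a risk register entry might look something like the sketch below. This is a minimal illustration assuming the register sits alongside existing DPIA records; the AIRiskEntry fields, the likelihood and severity scales, and the dpia_reference link are all assumptions, not requirements of the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIRiskEntry:
    """One identified risk for a High-Risk AI system, reviewed over its lifecycle."""
    system_name: str
    description: str              # known or foreseeable risk (intended use or foreseeable misuse)
    likelihood: str               # e.g. "low" / "medium" / "high" (internal scale, not from the Act)
    severity: str                 # impact on individuals or fundamental rights
    mitigation: str               # risk management measure adopted
    dpia_reference: Optional[str] = None   # link to the related GDPR DPIA, if one exists
    last_reviewed: date = field(default_factory=date.today)

    def needs_review(self, max_age_days: int = 180) -> bool:
        """Flag entries whose last review is older than the chosen review cycle."""
        return (date.today() - self.last_reviewed).days > max_age_days

# Example: a risk recorded for a recruitment-screening model
entry = AIRiskEntry(
    system_name="cv-screening-model",
    description="Model underrates candidates with non-standard career breaks",
    likelihood="medium",
    severity="high",
    mitigation="Bias testing before each release; human review of all rejections",
    dpia_reference="DPIA-2025-031",
)
print(entry.needs_review())
```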

Technical Documentation and Logging Requirements

High-Risk AI systems must be accompanied by comprehensive technical documentation covering the system's intended purpose, development methodology, performance metrics, training data governance, and testing results. This documentation must be kept up to date and made available to competent authorities on request.

Automatic logging of system operation — capturing events, inputs, and outputs in sufficient detail to enable post-hoc evaluation — is mandatory for High-Risk systems, and providers must retain these logs for a period appropriate to the system's intended purpose, at least six months, subject to other Union or national law. Building documentation and logging habits into the AI development lifecycle from the start is far more effective than retrospective documentation exercises.
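
One way to build logging in from the start is to wrap every model invocation so that it emits a structured audit record. The sketch below is a minimal, hypothetical example using only the Python standard library; the record fields and the logged_prediction wrapper are illustrative assumptions, not a format prescribed by the Act.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Structured JSON logging so that events, inputs, and outputs can be evaluated post hoc.
logger = logging.getLogger("ai_system_audit")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("ai_audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def logged_prediction(predict, inputs: dict, system_name: str, model_version: str) -> dict:
    """Call the model and write one audit record per invocation."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,
        "inputs": inputs,
    }
    try:
        outputs = predict(inputs)
        record["outputs"] = outputs
        record["status"] = "success"
    except Exception as exc:
        record["status"] = "error"
        record["error"] = repr(exc)
        raise
    finally:
        logger.info(json.dumps(record, default=str))
    return outputs

# Example usage with a stand-in model
def dummy_model(inputs: dict) -> dict:
    return {"score": 0.72}

print(logged_prediction(dummy_model, {"applicant_id": "A-1029"}, "cv-screening-model", "1.4.2"))
```

In a real deployment, the records would typically go to an append-only store with access controls and a defined retention period rather than a local file.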

Human Oversight: The Core Safeguard

A defining feature of the EU AI Act's approach to High-Risk AI is its insistence on meaningful human oversight. Article 14 requires that High-Risk AI systems be designed and developed so that natural persons can effectively oversee them and, where necessary, override or interrupt their operation. Human oversight measures must be appropriate to the risk and built into the system by design.

Effective human oversight under the Act means more than nominally keeping a human 'in the loop'. It requires that the overseeing person understands the system's capabilities and limitations, can interpret its outputs critically, and has the authority and means to intervene. Organisations should conduct oversight capability assessments to verify that their human reviewers genuinely have the tools, training, and time to exercise meaningful oversight.

Building Your AI Governance Programme

An effective EU AI Act governance programme starts with an AI inventory — a comprehensive list of all AI systems in use across the organisation, including third-party AI embedded in purchased software. Each system should be classified by risk tier, with the classification rationale documented. This inventory forms the foundation for all downstream governance activities.
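
In practice, the inventory can begin as a structured record per system that captures the risk tier and the rationale behind it. The sketch below is hypothetical and repeats the RiskTier enumeration so it stands alone; the fields shown are one reasonable starting point, not a mandated format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIInventoryRecord:
    """One entry in the organisation-wide AI inventory."""
    system_name: str
    owner: str                        # accountable business owner
    vendor: Optional[str]             # third-party supplier, if the AI is embedded in purchased software
    purpose: str                      # intended purpose, in plain language
    risk_tier: RiskTier
    classification_rationale: str     # why this tier was chosen, kept for the audit trail
    annex_iii_category: Optional[str] = None   # e.g. "employment and recruitment", if High Risk

inventory: list[AIInventoryRecord] = [
    AIInventoryRecord(
        system_name="cv-screening-model",
        owner="Head of Talent Acquisition",
        vendor=None,
        purpose="Rank inbound job applications for recruiter review",
        risk_tier=RiskTier.HIGH,
        classification_rationale="Employment use case listed in Annex III",
        annex_iii_category="employment and recruitment",
    ),
]

# Downstream governance activities (reviews, monitoring, incident response) key off this list.
high_risk = [r for r in inventory if r.risk_tier is RiskTier.HIGH]
print(f"{len(high_risk)} High-Risk system(s) requiring conformity assessment")
```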

Governance structures should include an AI risk committee with cross-functional membership (legal, privacy, technology, business), a review process for new AI deployments, ongoing monitoring of existing systems, and an incident response procedure for AI-related harms. Organisations that have already built GDPR compliance governance structures can extend and adapt them for AI Act compliance, leveraging shared data governance infrastructure and privacy expertise.

Automate your privacy compliance

See how TruePrivacy can handle DSRs, consent, and breach response — all in one platform.