EU AI Act
EU Artificial Intelligence Act
The world's first comprehensive legal framework for artificial intelligence, establishing risk-based rules for AI systems placed on or used in the EU market.
Overview
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive horizontal legislation governing artificial intelligence. It entered into force on August 1, 2024, and its obligations apply in phases: prohibited AI practices became enforceable on February 2, 2025; rules for General-Purpose AI (GPAI) models apply from August 2, 2025; and most high-risk AI obligations take effect from August 2, 2026, with longer transition periods for AI embedded in certain regulated products. The regulation applies to providers, deployers, importers, distributors, and product manufacturers involved with AI systems in the EU.
The EU AI Act uses a risk-based framework with four tiers. Unacceptable-risk AI systems are outright prohibited — these include systems that manipulate individuals subliminally, exploit vulnerabilities, or enable social scoring, as well as most uses of real-time remote biometric identification in publicly accessible spaces. High-risk AI systems, covering applications in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice, face the most onerous obligations, including conformity assessments and registration. Limited-risk systems (such as chatbots) must meet transparency requirements. Minimal-risk systems face no mandatory obligations.
A key addition is the regulation of General-Purpose AI (GPAI) models, including foundation models such as large language models. Providers of GPAI models must maintain technical documentation, put in place a policy to comply with EU copyright law, and publish a summary of the content used for training. Providers of 'systemic risk' GPAI models — presumed where cumulative training compute exceeds 10^25 FLOPs — face additional obligations including model evaluation, adversarial testing, and incident reporting to the AI Office.
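For a rough sense of scale, the sketch below checks whether an estimated training-compute figure crosses the 10^25 FLOP presumption. The 6 × parameters × tokens estimate is a common rule of thumb for dense transformer training, and the function and variable names are illustrative assumptions, not anything defined by the regulation.

```python
# Illustrative sketch only: does an estimated cumulative training compute cross
# the 10^25 FLOP threshold at which a GPAI model is presumed to pose systemic risk?
# The 6 * parameters * tokens estimate is a rule of thumb, not a legal test.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True when the estimate meets or exceeds the systemic-risk presumption threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical model: 7e10 parameters trained on 1.5e13 tokens -> ~6.3e24 FLOPs
print(presumed_systemic_risk(7e10, 1.5e13))  # False: below the 1e25 threshold
```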
Scope & Applicability
The EU AI Act applies to providers placing AI systems on the EU market or putting them into service in the EU; deployers using AI systems in the EU; importers and distributors of AI systems; product manufacturers; and providers/deployers of AI systems established in third countries where the output is used in the EU. It covers AI systems as defined in the regulation — machine-based systems that generate outputs such as predictions, recommendations, decisions, or content that influence physical or virtual environments.
Key Principles
1. Risk-Based Approach — obligations are proportionate to the risk level of the AI system (unacceptable, high, limited, minimal)
2. Human Oversight — high-risk AI systems must allow human oversight and intervention during operation
3. Transparency — AI systems interacting with humans must be designed to be identifiable as AI where not obvious
4. Robustness and Accuracy — high-risk AI must achieve appropriate levels of accuracy, robustness, and cybersecurity
5. Data Governance — training, validation, and testing datasets must be subject to appropriate governance practices
6. Accountability — providers must establish quality management systems and implement post-market monitoring
7. Fundamental Rights Protection — certain deployers of high-risk AI, such as public bodies and private entities providing public services, must conduct fundamental rights impact assessments
Data Subject Rights
Individuals subject to high-risk AI decisions in areas like employment, credit, or essential services can request meaningful explanations of the logic and likely consequences of the AI output.
Individuals have the right to request human oversight and challenge decisions made by high-risk AI systems, complementing GDPR's rights against automated decision-making.
Users interacting with AI systems, including emotion recognition systems, must be informed they are interacting with an AI where it would not otherwise be obvious.
Individuals are protected from manipulative AI, social scoring systems, real-time biometric surveillance in public spaces, and AI that exploits vulnerabilities based on age, disability, or a specific social or economic situation.
Individuals are protected from AI systems that infer sensitive characteristics (race, political opinions, religion, health, sexual orientation) from biometric data in most contexts.
Business Obligations
AI System Risk Classification
Providers and deployers must classify each AI system according to the risk framework and identify applicable obligations before placing the system on the market or into service.
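As an illustration of how this classification step might be recorded internally, the sketch below maps the Act's four risk tiers to the kinds of obligations discussed on this page. The tier names follow the framework described above, but the obligation lists and the obligations_for helper are illustrative assumptions, not an exhaustive legal checklist.

```python
# Illustrative only: a minimal mapping from the Act's four risk tiers to the
# kinds of obligations discussed on this page. Not an exhaustive legal checklist.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # e.g. employment, credit, law enforcement uses
    LIMITED = "limited"             # transparency-only systems such as chatbots
    MINIMAL = "minimal"             # everything else


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not place on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "quality management system",
        "EU database registration",
        "post-market monitoring and incident reporting",
    ],
    RiskTier.LIMITED: ["transparency disclosures to users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```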
Conformity Assessment for High-Risk AI
High-risk AI systems must undergo conformity assessment (internal control or third-party assessment by a notified body, depending on the use case) before deployment, and the CE marking must be affixed, physically or digitally, before the system is placed on the market.
Technical Documentation
High-risk AI providers must prepare and maintain comprehensive technical documentation covering system design, intended purpose, training data, accuracy metrics, and risk management.
Quality Management System (QMS)
Providers of high-risk AI systems must implement a QMS covering all stages from development through post-market monitoring, including processes for data governance, testing, and incident management.
EU AI Database Registration
High-risk AI systems must be registered in the EU database maintained by the European Commission before being placed on the market or put into service.
Post-Market Monitoring and Incident Reporting
Providers must monitor deployed AI systems for risks, collect and analyse user feedback, and report serious incidents to national authorities without undue delay.
Fundamental Rights Impact Assessment (FRIA)
Deployers of high-risk AI systems in the public sector and certain regulated industries must conduct and document a FRIA before deployment.
Cross-Border Transfer Rules
The EU AI Act has extraterritorial reach: it applies to providers and deployers outside the EU when their AI systems' outputs are used within the EU. Non-EU providers must designate an authorised representative established in the EU. For GPAI models, providers outside the EU must comply with the same obligations as EU-based providers when making models available in the EU. High-risk AI systems imported into the EU must meet all EU AI Act requirements, and importers bear liability if a system is non-conforming.
Breach Notification Requirements
Serious incidents involving high-risk AI systems (injury, death, damage to critical infrastructure, or violations of fundamental rights) must be reported immediately, and in any event within 15 days of the provider becoming aware of the incident; shorter deadlines apply to certain incident categories.
Reports go to the national market surveillance authority of the Member State where the incident occurred, or to the AI Office for incidents involving GPAI models with systemic risk.
Individuals directly affected by a serious incident may also need to be notified under national law or in conjunction with GDPR breach obligations.
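To make the timing concrete, here is a minimal sketch of the outer deadline calculation, assuming the 15-day clock runs from the date the provider becomes aware of the incident. The function name is hypothetical, shorter periods apply to some incident categories, and this is not a substitute for the reporting rules themselves.

```python
# Illustrative only: outer reporting deadline, assuming the 15-day window runs
# from the date the provider becomes aware of the serious incident.
# Shorter deadlines apply to certain incident categories.
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=15)


def reporting_deadline(awareness_date: date) -> date:
    """Latest date to notify the relevant market surveillance authority."""
    return awareness_date + REPORTING_WINDOW


print(reporting_deadline(date(2026, 9, 1)))  # 2026-09-16
```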
How TruePrivacy Helps
Purpose-built tools for every EU AI Act obligation.
TruePrivacy's AI register automatically classifies each AI system or GPAI model against the EU AI Act's risk tiers and surfaces the specific obligations that apply.
Combined Fundamental Rights Impact Assessment and DPIA workflows ensure that AI deployments meet both EU AI Act and GDPR requirements in a single coordinated process.
Pre-built documentation templates aligned with Annex IV of the EU AI Act streamline conformity assessment preparation for high-risk AI providers.
Guided workflows prepare and submit all required information to the EU AI database, with automatic tracking of registration status and renewal deadlines.
TruePrivacy monitors deployed AI systems for anomalies, collects user feedback, and manages the incident reporting lifecycle from detection through regulatory notification.
Tooling designed specifically for GPAI model providers tracks training data documentation, copyright compliance, systemic risk thresholds, and AI Office reporting obligations.
Ready to achieve EU AI Act compliance?
TruePrivacy automates your compliance workflows so your team can focus on what matters.