
Frequently asked questions: the EU AI Act
What is the EU AI Act?
It’s Regulation (EU) 2024/1689, the world’s first comprehensive, legally binding framework for AI. It harmonises rules for developing, placing on the market, and using AI in the EU, with the twin aims of safeguarding fundamental rights/safety and fostering innovation. It entered into force on 1 August 2024.
When do its rules start to apply?
Its rules phase in:
2 Feb 2025 — Chapter I (general provisions) and Chapter II (prohibited practices) apply.
2 Aug 2025 — Obligations for general-purpose AI (GPAI) models apply, along with the governance, notified-body, and penalties provisions.
2 Aug 2026 — Most remaining obligations (including those for high-risk systems) apply.
2 Aug 2027 — Article 6(1) (high-risk classification of AI that is a safety component of a product covered by EU harmonisation legislation) and the corresponding obligations apply.
Who must comply?
The Act has a broad scope. It applies to providers placing AI on the EU market or putting it into service, deployers using AI in the EU, importers/distributors, and, for certain parts, to persons affected in the Union—regardless of where the provider is established.
How does the risk-based approach work?
The Act organises obligations by risk to health, safety, and fundamental rights:
Unacceptable risk (prohibited). Bans include: social scoring; untargeted scraping of facial images from the internet or CCTV footage to build facial-recognition databases; emotion recognition in workplaces and educational institutions (except for medical or safety reasons); biometric categorisation to infer sensitive attributes (e.g., religion, sexual orientation); and real-time remote biometric identification (RBI) in publicly accessible spaces by law enforcement, subject to narrow, listed exceptions (e.g., locating victims, preventing specific imminent threats, or identifying suspects of serious crimes under prior judicial/administrative authorisation).
High risk (strict obligations). High-risk uses (Annex III) span, for example: critical infrastructure safety components; education (admissions, grading, invigilation); employment (recruiting, promotion, monitoring); access to essential public/private services (e.g., creditworthiness of natural persons, life/health-insurance pricing, public-benefit eligibility); certain law-enforcement, migration/asylum, border control uses; and administration of justice/democratic processes.
Limited risk (transparency). People must be told when they are interacting with an AI system (unless this is obvious from the context). Deployers must label deepfakes (with tailored exceptions) and disclose AI-generated or manipulated text published to inform the public on matters of public interest (unless the content has undergone human review and a natural or legal person holds editorial responsibility).
Minimal risk. Most other AI systems (e.g., spam filters, game AI) face no specific obligations but may follow voluntary codes of conduct.
What must providers of high-risk systems do?
High-risk systems must (among other things) implement a documented risk-management system; ensure data quality and data governance for training, validation, and testing sets; prepare technical documentation; enable logging and record-keeping; provide transparent instructions for use; ensure effective human oversight; and achieve appropriate accuracy, robustness, and cybersecurity. A quality-management system, conformity assessment, and CE marking are required; registration in the EU database is also necessary for most high-risk systems before market placement or use (with a secure non-public section for sensitive law-enforcement/migration uses and national-level registration for some critical-infrastructure systems).
Do deployers (users) have any duties as well?
Yes. Deployers of high-risk AI must, among other obligations, use the system in accordance with the provider's instructions for use, ensure competent human oversight, inform affected persons where applicable, cooperate with authorities, and, if they are public bodies or private entities providing public services (or in certain credit and insurance cases), carry out a fundamental-rights impact assessment (FRIA) before deployment and notify the results to the market-surveillance authority.
What are the transparency duties for “limited-risk” AI?
There are three primary duties in relation to “limited-risk” AI:
Inform people that they are interacting with an AI system (unless it is obvious).
Inform people when emotion recognition or biometric categorisation is used (with limited law-enforcement exceptions).
Label deepfakes and disclose AI-generated/manipulated text when published to inform the public (with tailored exemptions).
What is the difference between an AI system and a GPAI model?
The Act defines an AI system as a machine-based system that, for explicit or implicit objectives, infers from inputs how to generate outputs (predictions, content, recommendations, decisions) that can influence physical or virtual environments. In practice, the system is the end product used for a specific purpose. A general-purpose AI (GPAI) model is a broadly capable model that can be integrated into many systems. The Act regulates systems and models differently.
How are GPAI models regulated?
All GPAI model providers must, among other things, maintain technical documentation, share sufficient information with downstream system providers, publish a sufficiently detailed training data summary, and have a copyright compliance policy (including respecting text-and-data-mining opt-outs under EU law). Open-source GPAI models enjoy a tailored documentation exception—but not if they present systemic risk.
What counts as a GPAI model with “systemic risk”?
A GPAI model is presumed to have high-impact capabilities—and thus “systemic risk”—if the training compute exceeds 10^25 FLOPs. The Commission (via the AI Office) can also designate systemic risk based on capabilities/impact criteria (Annex XIII). Systemic-risk models must undergo evaluations (including adversarial testing/red-teaming), continuously assess/mitigate systemic risks, report serious incidents, and ensure enhanced cybersecurity.
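As a back-of-the-envelope illustration of that compute threshold, here is a minimal Python sketch. It assumes the common "about 6 FLOPs per parameter per training token" heuristic for estimating training compute, which is a rule of thumb from the scaling-law literature, not a method the Act prescribes; the function names are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # compute threshold for the presumption

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Rule-of-thumb estimate (assumption): ~6 FLOPs per parameter per training token.
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    # True if the estimated cumulative training compute exceeds 10^25 FLOPs.
    return estimated_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 500B-parameter model trained on 10T tokens gives
# 6 * 5e11 * 1e13 = 3e25 FLOPs, above the threshold.
print(presumed_systemic_risk(5e11, 1e13))  # True

Note that designation can also occur below the threshold via the Annex XIII criteria, so this check only reflects the compute-based presumption.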
Who enforces the Act?
Member States’ market-surveillance authorities enforce most obligations. The European AI Office (within the Commission) has exclusive supervisory and enforcement powers for GPAI-model obligations and coordinates with the European Artificial Intelligence Board, which is composed of Member State representatives.
What are the penalties?
Maximum administrative fines for companies (in each case, the higher of the fixed amount or the percentage of total worldwide annual turnover for the preceding financial year):
Up to €35m or 7% for prohibited AI practices.
Up to €15m or 3% for most other non-compliance.
Up to €7.5m or 1% for supplying incorrect, incomplete, or misleading information to authorities.
For SMEs and start-ups, the lower of the two amounts applies as the cap. For GPAI-model providers, the Commission may impose fines of up to €15m or 3% of worldwide annual turnover for breaches of Chapter V obligations.
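As a rough arithmetic illustration of how these caps combine, the Python sketch below encodes the tiers listed above; it is not legal advice, and the tier names and function are hypothetical conveniences.

# Fine-cap tiers from the answer above: (fixed cap in EUR, share of worldwide turnover).
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_non_compliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine_cap(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    # Larger companies: the higher of the two amounts; SMEs/start-ups: the lower.
    fixed_cap, share = FINE_TIERS[tier]
    turnover_cap = share * worldwide_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Example: a company with EUR 2bn worldwide turnover committing a prohibited practice
# faces a cap of max(EUR 35m, 7% of EUR 2bn) = EUR 140m.
print(f"{max_fine_cap('prohibited_practice', 2_000_000_000):,.0f}")  # 140,000,000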
What is the EU database, and when is registration required?
Before placing most Annex III high-risk systems on the market or putting them into service, providers (and certain public-sector deployers) must register in the EU database run by the Commission. Some sensitive uses are recorded in a secure, non-public section, and some critical-infrastructure systems are registered at the national level. The public section is designed to be accessible and machine-readable.
Does the Act support innovation (e.g., sandboxes)?
Yes. It provides for regulatory sandboxes and controlled real-world testing under authority supervision to help organisations test novel AI while ensuring safeguards.