The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the first-ever comprehensive legal framework on AI worldwide. The aim of the rules is to foster trustworthy AI in Europe.
The AI Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI. It is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package, the launch of AI Factories and the Coordinated Plan on AI. Together, these measures guarantee safety, fundamental rights and human-centric AI, and strengthen uptake, investment and innovation in AI across the EU.
To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support the future implementation of the Act, engage with stakeholders, and invite AI providers and deployers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.
Why do we need rules on AI?
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.
For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. This can make it difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.
A risk-based approach
The AI Act defines 4 levels of risk for AI systems:
Unacceptable risk
All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned. The AI Act prohibits eight practices, namely:
harmful AI-based manipulation and deception
harmful AI-based exploitation of vulnerabilities
social scoring
individual criminal offence risk assessment or prediction
untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
emotion recognition in workplaces and education institutions
biometric categorisation to deduce certain protected characteristics
real-time remote biometric identification for law enforcement purposes in publicly accessible spaces
High risk
AI use cases that can pose serious risks to health, safety or fundamental rights are classified as high-risk. These high-risk use cases include:
AI safety components in critical infrastructures (e.g. transport), the failure of which could put the life and health of citizens at risk
AI solutions used in education institutions that may determine access to education and the course of someone’s professional life (e.g. scoring of exams)
AI-based safety components of products (e.g. AI application in robot-assisted surgery)
AI tools for employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment)
certain AI use cases used to grant access to essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
AI systems used for remote biometric identification, emotion recognition and biometric categorisation (e.g. an AI system used to retroactively identify a shoplifter)
AI use cases in law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
AI use cases in migration, asylum and border control management (e.g. automated examination of visa applications)
AI solutions used in the administration of justice and democratic processes (e.g. AI solutions to prepare court rulings)
High-risk AI systems are subject to strict obligations before they can be put on the market:
adequate risk assessment and mitigation systems
high quality of the datasets feeding the system, to minimise risks of discriminatory outcomes
logging of activity to ensure traceability of results
detailed documentation providing all the information on the system and its purpose that authorities need to assess its compliance
clear and adequate information to the deployer
appropriate human oversight measures
high level of robustness, cybersecurity and accuracy
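To make the obligations above concrete, here is a minimal, purely illustrative sketch in Python that models them as a pre-market checklist. The class and labels are hypothetical; the Act defines these obligations in legal, not programmatic, terms.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the obligations listed above; illustrative only.
HIGH_RISK_OBLIGATIONS = [
    "risk assessment and mitigation system",
    "high-quality datasets",
    "activity logging for traceability",
    "detailed technical documentation",
    "clear information for the deployer",
    "human oversight measures",
    "robustness, cybersecurity and accuracy",
]

@dataclass
class HighRiskSystem:
    name: str
    satisfied: set[str] = field(default_factory=set)

    def may_be_placed_on_market(self) -> bool:
        # Every obligation must be met before market placement.
        return all(ob in self.satisfied for ob in HIGH_RISK_OBLIGATIONS)

cv_sorter = HighRiskSystem("CV-sorting recruitment tool")
cv_sorter.satisfied.update(HIGH_RISK_OBLIGATIONS[:5])
print(cv_sorter.may_be_placed_on_market())  # False: oversight and robustness still missing
```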
Limited risk
Limited risk refers to the risks associated with a lack of transparency in the use of AI. The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision.
Moreover, providers of generative AI have to ensure that AI-generated content is identifiable. On top of that, certain AI-generated content should be clearly and visibly labelled, namely deep fakes and text published to inform the public on matters of public interest.
Minimal or no risk
The AI Act does not introduce rules for AI deemed to pose minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category. This includes applications such as AI-enabled video games or spam filters.
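For readers who find code clearer, the four risk levels above can be summarised as a simple mapping from tier to regulatory treatment. This is an illustrative sketch only; whether a real system falls into a given tier depends on the Act's detailed legal criteria, and the example entries below are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market placement"
    LIMITED = "transparency and disclosure obligations"
    MINIMAL = "no new obligations under the AI Act"

# Hypothetical examples drawn from the text above; real classification
# follows the Act's criteria, not a lookup table.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```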
How does it all work in practice for providers of high-risk AI systems?
Once an AI system is on the market, authorities are in charge of market surveillance, deployers ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and deployers will also report serious incidents and malfunctions.
A solution for the trustworthy use of large AI models
General-purpose AI (GPAI) models can perform a wide range of tasks and are becoming the basis for many AI systems in the EU. Some of these models could carry systemic risks if they are very capable or widely used. To ensure safe and trustworthy AI, the AI Act puts in place rules for providers of such models, including transparency and copyright-related rules. For models that may carry systemic risks, providers must also assess and mitigate those risks. The AI Act’s rules on GPAI became applicable on 2 August 2025.
Supporting compliance
In July 2025, the Commission introduced 3 key instruments to support the responsible development and deployment of GPAI models:
The Guidelines on the scope of the obligations for providers of GPAI models clarify the scope of the GPAI obligations under the AI Act, helping actors along the AI value chain understand who must comply.
The GPAI Code of Practice is a voluntary compliance tool, submitted to the Commission by independent experts, which offers practical guidance to help providers comply with their obligations under the AI Act related to transparency, copyright, and safety and security.
The Template for the public summary of training content of GPAI models requires providers to give an overview of the data used to train their models, including the sources from which the data was obtained (comprising large datasets and top domain names). The template also requests information about data processing aspects, enabling parties with legitimate interests to exercise their rights under EU law.
These tools are designed to work hand in hand. Together, they provide a clear and actionable framework for providers of GPAI models to comply with the AI Act, reducing administrative burden and fostering innovation while safeguarding fundamental rights and public trust.
Governance and implementation
The European AI Office and the authorities of the Member States are responsible for implementing, supervising and enforcing the AI Act. The AI Board, the Scientific Panel and the Advisory Forum steer and advise the AI Act’s governance. Find out more details about the Governance and enforcement of the AI Act.
Application timeline
The AI Act entered into force on 1 August 2024 and will be fully applicable two years later, on 2 August 2026, with some exceptions:
prohibitions and AI literacy obligations entered into application on 2 February 2025
the governance rules and the obligations for GPAI models became applicable on 2 August 2025
the rules for high-risk AI systems embedded into regulated products have an extended transition period until 2 August 2027
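As a worked illustration of this staggered timeline, the sketch below encodes the application dates listed above and checks which rules already apply on a given date. It is a reading aid under the dates stated in this section, not legal guidance.

```python
from datetime import date

# Application dates taken from the timeline above.
APPLICATION_DATES = {
    "prohibitions and AI literacy obligations": date(2025, 2, 2),
    "governance rules and GPAI model obligations": date(2025, 8, 2),
    "general application of the AI Act": date(2026, 8, 2),
    "high-risk rules for AI embedded in regulated products": date(2027, 8, 2),
}

def rules_applicable(on: date) -> list[str]:
    """Return the provisions already applicable on a given date."""
    return [rule for rule, start in APPLICATION_DATES.items() if on >= start]

print(rules_applicable(date(2025, 9, 1)))
# ['prohibitions and AI literacy obligations',
#  'governance rules and GPAI model obligations']
```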