The European Union's regulatory framework for artificial intelligence (AI), established through the Artificial Intelligence Act (AI Act), marks a significant step in regulating this technology. In force since August 1, 2024, it aims to ensure that AI developed and used in Europe is safe and respects citizens' fundamental rights.
The AI Act introduces a definition of AI intended to remain applicable as the technology evolves, and it adopts a risk-based approach to regulation, classifying AI systems into four categories according to their potential risk to health, safety, and fundamental rights. Minimal-risk systems, such as spam filters and AI-enabled video games, face no obligations under the law, although companies may voluntarily adopt additional codes of conduct. Systems posing specific transparency risks, such as chatbots, must clearly inform users that they are interacting with a machine, and certain AI-generated content must be explicitly labeled as artificial.
High-risk systems, such as AI-based medical software or recruitment tools, must meet stringent requirements, including risk mitigation, data quality, human oversight, and robustness. They must also log their activities and provide clear information to users. Unacceptable-risk systems, those posing a clear threat to fundamental rights, are banned outright; this includes AI applications enabling social scoring by governments or companies and certain uses of biometric recognition in public spaces.
Moreover, the AI Act also regulates general-purpose AI models (GPAI), which are highly capable models designed to perform a wide variety of tasks. The legislation ensures transparency along the value chain and manages the potential systemic risks of these advanced models.
The AI Act will be implemented and supervised by national authorities designated by the Member States, which have until August 2, 2025, to establish these bodies. At the EU level, the AI Office will be the main entity overseeing application of the rules. Three advisory bodies will support implementation: the European Artificial Intelligence Board, a scientific panel of independent experts, and a consultative forum composed of various stakeholders.
Companies that fail to comply with the AI Act face significant penalties, with fines of up to 7% of global annual turnover for the most serious violations. Most rules will apply from August 2, 2026, although certain prohibitions take effect earlier. To ease the transition, the European Commission has launched the AI Pact, inviting AI developers to voluntarily adopt key obligations of the AI Act ahead of the legal deadlines.
In summary, the AI Act establishes a comprehensive and globally pioneering regulatory framework, ensuring that AI developed and used in the EU is trustworthy and safe while fostering innovation and protecting citizens' rights. For more information, see the European Commission's official page on the AI regulatory framework: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai