The rapid advance of Artificial Intelligence (AI) has generated ethical debates and concerns about its impact on society. To address these challenges, the European Parliament approved the Artificial Intelligence Act (AI Act) on March 13, 2024, establishing a comprehensive regulatory framework for the development and use of AI in the European Union (EU).
Definition of an artificial intelligence system

The regulation defines an artificial intelligence system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The regulation’s rules seek to ensure the safety and security of European citizens while promoting innovation and competitiveness in the field of AI. One key aspect of the legislation is the classification of AI systems into risk categories, ranging from minimal to unacceptable risk, determined by their potential to cause harm.

Risk assessments

Under the law, AI systems considered high risk, such as those used in healthcare, autonomous transportation, and the administration of justice, will be subject to stricter transparency, oversight, and control requirements. This includes the obligation to carry out risk assessments, maintain detailed records, and ensure the traceability and explainability of decisions made by these systems.

In addition, the regulation prohibits certain AI practices considered especially dangerous or discriminatory, such as the use of real-time remote biometric identification (including facial recognition) in public spaces for mass surveillance, except in narrowly defined and strictly regulated cases. This prohibition reflects the EU’s commitment to protecting individual rights and citizens’ privacy.

Governance system

Another crucial aspect is the creation of an AI governance system at the EU level, including a European AI Office, to monitor compliance with these rules and ensure consistency in their application across Member States. Cooperation and coordination mechanisms will be established between national authorities and the European Commission to address cross-border challenges and promote best practices.

This new regulation represents a significant step towards ethical and responsible AI in Europe. By encouraging responsible innovation and protecting fundamental rights, the EU seeks to lead the development of AI globally, setting a standard for the safe and ethical adoption of this technology.

Implementation

Implementing the regulation will require a series of coordinated actions combining regulatory, educational, and collaborative approaches to ensure that the technology is used ethically, safely, and beneficially. This will involve technical guidelines and standards, training and awareness-raising, supervision, evaluation and monitoring, as well as financial support to help companies adapt to the new regulatory requirements.

If you have any questions, do not hesitate to contact our team: