EU Council gives green light to risk-based regulation of AI

It’s official: European Union lawmakers have given final approval to the bloc’s flagship risk-based regulation for artificial intelligence.

In a press release confirming the approval of the EU AI law, the Council of the European Union said the law was “groundbreaking” and that “as the first of its kind in the world, it can establish a global standard for AI regulation.”

The European Parliament had already approved the legislation in March.

The Council’s approval means the legislation will be published in the bloc’s Official Journal in the coming days and will come into force across the EU 20 days later. The new rules will be implemented in stages, although some provisions will only apply after two years or more.

The law takes a risk-based approach to regulating uses of AI and outright bans a handful of “unacceptable risk” use cases, such as cognitive-behavioral manipulation or social scoring. It also defines a set of “high-risk” uses, such as biometrics and facial recognition, or AI used in areas like education and employment. Developers of high-risk applications will need to register their systems and meet risk- and quality-management obligations to access the EU market.

Another category of AI applications, such as chatbots, is considered “limited risk” and subject to lighter transparency obligations.

The law responds to the rise of generative AI tools with a set of rules for “general purpose AI” (GPAI), like the models that underpin OpenAI’s ChatGPT. However, most GPAIs will only be subject to limited transparency requirements, and only GPAI models that exceed a certain compute threshold and are considered to present “systemic risk” will face stricter regulation. (For more on how the EU AI law responds to GPAI, see our previous report.)

“The adoption of the AI law constitutes an important step for the European Union,” Mathieu Michel, Belgian Secretary of State for Digital Affairs, said in a statement. “This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI Law, Europe underlines the importance of trust, transparency and accountability when it comes to new technologies, while ensuring that this rapidly evolving technology can thrive and drive European innovation.”

Additionally, the law establishes a new governance architecture for AI, including an enforcement body within the European Commission called the AI Office.

There will also be an AI Board made up of representatives from EU member states to advise and assist the Commission on the consistent and effective application of the AI law, in much the same way as the European Data Protection Board helps guide the application of the GDPR. The Commission will also establish a scientific panel to support monitoring, as well as an advisory forum to provide technical expertise.

Standards bodies will play a key role in determining what is required of AI application developers, as the law seeks to replicate the EU’s long-standing approach to product regulation. We should expect the industry to redirect the energy it has focused on lobbying against the legislation toward efforts to shape the standards that will be applied to AI developers.

The law also encourages the establishment of regulatory sandboxes to support the development and real-world testing of new AI applications.

It should be noted that although the EU AI law constitutes the bloc’s first comprehensive regulation on artificial intelligence, AI developers may already be subject to existing laws such as copyright law, the GDPR, the bloc’s e-governance regime and various competition laws.
