The EU AI Act, the world’s first major artificial intelligence law, receives the final green light


European Union member states on Tuesday passed the world’s first major law to regulate artificial intelligence, as institutions around the world work to introduce restrictions on the technology.

The EU Council said it has given final approval to the AI Act, a groundbreaking piece of legislation that introduces the first comprehensive set of rules for artificial intelligence.

“The adoption of the AI Act constitutes an important step for the European Union,” Mathieu Michel, Belgian secretary of state for digitization, said on Tuesday.

“With the AI Act, Europe underlines the importance of trust, transparency and accountability when it comes to new technologies, while ensuring that this rapidly evolving technology can thrive and boost European innovation,” Michel added.

The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the threats they pose to society.

The law prohibits AI applications deemed “unacceptable” in terms of risk level. These include so-called “social scoring” systems that rank citizens based on the aggregation and analysis of their data, predictive policing, and emotion recognition in the workplace and in schools.


High-risk AI systems include autonomous vehicles and medical devices, which are assessed based on the risks they pose to the health, safety and fundamental rights of citizens. They also include applications of AI in financial services and education, where there is a risk of bias being built into AI algorithms.

Big American tech companies in the spotlight

Matthew Holman, a partner at law firm Cripps, said the rules will have major implications for anyone developing, creating, using or reselling AI in the EU – with US tech companies in the spotlight.

“The EU AI Act is unlike any other law on the planet,” Holman said. “It creates, for the first time, a detailed regulatory regime for AI.”

“US tech giants are watching this law closely,” Holman added. “A lot of funding has gone into public-facing generative AI systems, which will need to comply with the new law — a law that is, in some places, quite onerous.”

The European Commission will have the power to fine companies that violate the AI Act up to 35 million euros ($38 million) or 7% of their global annual revenue, whichever is higher.

The EU’s draft rules were revised after OpenAI launched ChatGPT.

That’s when officials realized the AI Act lacked enough detail to address emerging advanced generative AI capabilities and the risks they posed around the use of copyright-protected material.

A long road to implementation

For generative AI systems, termed by the EU as “general purpose” AI, the regulation introduces strict restrictions, including requirements to comply with EU copyright law, transparency disclosures on how models are trained, routine testing, and adequate cybersecurity protections.

But it will be some time before the requirements for these general-purpose models are actually implemented, according to Dessi Savova, a partner at law firm Clifford Chance. The requirements for general-purpose systems will only take effect 12 months after the AI Act enters into force.

Even then, generative AI systems that are already commercially available, such as OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Copilot, benefit from a “transition period” giving them 36 months from the date the law takes effect to bring their technology into compliance with the legislation.

“An agreement has been reached on the AI Act – and that rulebook is about to become a reality,” Savova told CNBC via email. “Now the focus must turn to the effective implementation and enforcement of the AI Act.”
