Europe’s three biggest economies have come out against regulating the most powerful types of artificial intelligence, putting the fate of the bloc’s pioneering AI law at stake.
France, Germany and Italy are stalling negotiations over a controversial section of the EU’s AI bill, arguing it should not hamper Europe’s development of “foundation models” — the AI infrastructure that underpins large language models like OpenAI’s GPT and Google’s Bard.
Government officials say imposing harsh restrictions on these new models would harm the EU’s own champions in the race to exploit AI technology.
In a joint document shared with other EU governments, obtained by POLITICO, the three European heavyweights said that Europe needs a “regulatory framework that promotes innovation and competition, so that European actors can emerge and carry our voice and our values in the global AI race.” The paper suggests that makers of foundation models self-regulate through company pledges and codes of conduct.
The Franco-German-Italian push pits the three countries against European lawmakers, who are determined to rein in foundation models.
“This is a declaration of war,” said a member of the European Parliament’s negotiating team, who requested anonymity due to the sensitivity of the negotiations.
The impasse could even spell the end of negotiations on the AI law altogether. Interinstitutional talks on the law stalled at EU level after parliamentary staff walked out of a meeting with government representatives from the EU Council and European Commission officials in mid-November, in response to the three countries’ resistance to regulating foundation models.
The talks are under intense pressure, as negotiators face a December 6 deadline. With European Parliament elections coming in June 2024, the window of opportunity to pass the law is closing fast.
An anti-European act
The desire to scale back the European regulatory model is surprising because it breaks with the continent’s traditional thinking that the technology sector needs stronger regulation.
What’s more, it comes at a time when major leaders in the AI industry have called for strict regulation of their technology, and countries like the United States — a longtime supporter of light-touch tech laws — are rolling out their own regulatory agenda via a sweeping executive order on AI.
Ignoring foundation models (and, by extension, the most advanced among them, called “frontier models” by industry insiders) would be “crazy” and risk making European AI law “the law of the jungle,” Canadian computer scientist Yoshua Bengio, a leading voice on AI policy, said in an interview last week.
“We could end up in a world where harmless AI systems are heavily regulated in the EU… and where the largest, most dangerous, most potentially harmful systems are unregulated,” Bengio added.
The European Union’s AI law broadly bans or imposes strict rules on AI applications based on their use in sensitive areas like education, immigration and the workplace. Foundation models, however, can perform many different tasks, making it difficult to predict their risk level in advance.
In their proposal, EU parliamentarians planned to add obligations for developers of foundation models, regardless of the intended use of the system, including mandatory testing of models by third-party experts. Some obligations would only apply to models with greater computing power, creating a two-tiered set of rules that all three governments explicitly rejected in their document.
While other EU countries – notably Spain, which holds the rotating presidency of the Council – are in favor of expanding the scope of the AI law to cover foundation models, the Council now has little room to deviate from the position of the Big Three.