EU AI regulation efforts hit a snag

An illustration of AI atop a computer motherboard.
Dado Ruvic/Illustration/Reuters

Europe has spent two years trying to adopt comprehensive AI regulation. The AI Act, first introduced by the European Commission in 2021, aspires to regulate AI models based on different risk categories.

The proposed law would ban dangerous models outright, such as those that might manipulate humans, and mandate strict oversight and transparency for powerful models that carry the risk of harm. For lower-risk models, the AI Act would require simple disclosures. The makers of generative AI models, like the one powering ChatGPT, would have to submit to safety checks and publish summaries of the copyrighted material they’re trained on. In May, the European Parliament approved the legislation, but the three bodies of the European legislature are still hammering out the final text.

Bump in the road: Last week, France, Germany, and Italy dealt the AI Act a setback by reaching an agreement that supports “mandatory self-regulation through codes of conduct” for AI developers building so-called foundation models. These are the models that are trained on massive sets of data and can be used for a wide range of applications, including OpenAI’s GPT-4, the large language model that powers ChatGPT. This surprise deal represents a desire to bolster European AI firms at the expense of the effort to hold them legally accountable for their products.

The view of these countries, three of the most powerful in the EU, is that the application of AI should be regulated, not the technology itself — a departure from the EU’s existing plan to regulate foundation models directly. While the tri-country proposal would require developers to publish information about safety tests, it imposes no penalties for withholding that information, though it suggests that sanctions could be introduced later.

A group of tech companies, including Apple, Ericsson, Google, and SAP, signed a letter backing the proposal: “Let's not regulate [AI] out of existence before they get a chance to scale, or force them to leave,” the group wrote.

But it angered European lawmakers who favor the AI Act. “This is a declaration of war,” one member of the European Parliament told Politico, which suggested that this “power grab” could even end progress on the AI Act altogether. A fifth round of European trilogue discussions is set for Dec. 6, 2023.

EU regulators have grown hungry to regulate AI, a counter to the more laissez-faire approach of the United Kingdom under Prime Minister Rishi Sunak, whose recent Bletchley Declaration, signed by countries including the US and China, was widely considered nonbinding and light-touch. Now, the three largest economies in Europe — France, Germany, and Italy — have brought that thinking to EU negotiations, and they need to be appeased. Europe’s Big Three not only carry political weight but can form a blocking minority in the Council of the EU if there’s a vote, says Nick Reiners, senior geotechnology analyst at Eurasia Group.

Reiners says the wrench thrown by the three countries makes it unlikely that the AI Act’s text will be agreed upon by the original target date of Dec. 6. But there is still strong political will on both sides, he says, to reach a compromise before next June’s European Parliament elections.
