EU AI regulation efforts hit a snag
Europe has spent two years trying to adopt comprehensive AI regulation. The AI Act, first introduced by the European Commission in 2021, aims to regulate AI models according to the level of risk they pose.
The proposed law would ban dangerous models outright, such as those that might manipulate humans, and mandate strict oversight and transparency for powerful models that carry a risk of harm. Lower-risk models would face only simple disclosure requirements, while the makers of generative AI models, like the one powering ChatGPT, would have to submit to safety checks and publish summaries of the copyrighted material their models are trained on. The European Parliament approved its version of the legislation in June, but the three bodies of the European legislature are still hammering out the final text.
Bump in the road: Last week, France, Germany, and Italy dealt the AI Act a setback by reaching an agreement that supports “mandatory self-regulation through codes of conduct” for AI developers building so-called foundation models. These are models trained on massive datasets and adaptable to a wide range of applications; they include OpenAI’s GPT-4, the large language model that powers ChatGPT. This surprise deal reflects a desire to bolster European AI firms at the expense of the effort to hold them legally accountable for their products.
These countries, three of the most powerful in the EU, argue that the application of AI should be regulated, not the technology itself, a departure from the EU’s existing plan to regulate foundation models directly. And while the tri-country proposal would require developers to publish information about safety tests, it imposes no penalties for withholding that information, though it suggests that sanctions could be introduced later.
A group of tech companies, including Apple, Ericsson, Google, and SAP, signed a letter backing the proposal: “Let’s not regulate [AI companies] out of existence before they get a chance to scale, or force them to leave,” the group wrote.
But the deal angered European lawmakers who favor the AI Act. “This is a declaration of war,” one member of the European Parliament told Politico, which suggested that this “power grab” could even halt progress on the AI Act altogether. A fifth round of European trilogue discussions is set for Dec. 6, 2023.
EU lawmakers have shown a strong appetite for regulating AI, a counter to the more laissez-faire approach of the United Kingdom under Prime Minister Rishi Sunak, whose recent Bletchley Declaration, signed by countries including the US and China, was widely considered nonbinding and light-touch. Now France, Germany, and Italy, the three largest economies in Europe, have brought that lighter-touch thinking to EU negotiations, and they need to be appeased. Europe’s Big Three not only carry political weight but can form a blocking minority in the Council of the European Union if there’s a vote, says Nick Reiners, senior geotechnology analyst at Eurasia Group.
Reiners says the wrench thrown by the three countries makes it unlikely that the AI Act’s text will be agreed upon by the original target date of Dec. 6. But there is still strong political will on both sides, he says, to reach a compromise before next June’s European Parliament elections.
Hard Numbers: A soured stock sale, a European agreement, copyright complaints, and a secretive summit
$86 billion: Sam Altman’s ouster from OpenAI calls into question an employee stock sale that would have valued the company at $86 billion. The sale was supposed to close as early as next month, according to The Information. With Altman’s departure and an expected mass exodus of OpenAI staff, possibly to Microsoft, expect that valuation to take a serious hit, if the stock sale happens at all. Microsoft’s stock, meanwhile, closed at a record high on Monday.
3: Three major European countries have come to an agreement about how AI should be regulated. France, Germany, and Italy have agreed to “mandatory self-regulation through codes of conduct,” but without any punitive sanctions, at least for now. The move will further weaken European efforts to pass the Artificial Intelligence Act, which have already been slowed by disagreements over how strenuously to regulate the technology.
10,000: Shira Perlmutter, the US Register of Copyrights and the country’s top copyright official, said her office has received 10,000 comments about artificial intelligence in recent months. Artists have urged federal officials like Perlmutter to take a stance against AI, fearing that the technology is inherently infringing. Meanwhile, a raft of lawsuits alleging copyright violations is making its way through federal courts.
100: More than 100 people gathered last week in the mountains of Utah for an invite-only conference called the AI Security Summit. Bloomberg called the event “secretive” but reported that speakers included multiple OpenAI executives — this was just days before the Altman ouster — as well as multiple US military officials. Among the topics discussed were Biden’s AI executive order, the threat from China, and the state of the semiconductor industry.