Should AI content be protected as free speech?
Americans love free speech, and for all its flaws, the American government takes a lighter hand in regulating it than many other major democracies. But even in the US, there are limits. So where do misinformation and AI-fabricated imagery and audio fit into free speech?
Eléonore Caroit, vice president of the French Parliament’s Foreign Affairs Committee, says she understands the sensitivities around taking down political speech in the US. “In the US, you have the First Amendment, which is so important that anything else could be seen as censorship,” she said. “Whereas, in France, I think we have a higher tolerance to some sort of regulation, which is not going to be seen as censorship as it would in the US.”
Caroit spoke at a GZERO Global Stage discussion with Ian Bremmer, president and founder of Eurasia Group and GZERO Media; Rappler CEO Maria Ressa; and Microsoft Vice Chair and President Brad Smith, moderated by Julien Pain, journalist and host of Franceinfo, live from the 2023 Paris Peace Forum.
Where should the line fall between free speech and censorship? If someone, say, publishes an AI-made campaign video showing misleading images of immigrants rioting to call for hardline migration policies, would the government be within its rights to force tech companies to remove it from their platforms? Caroit noted that France did, in fact, remove two AI-made videos from far-right presidential candidate Eric Zemmour, and could have taken down his entire YouTube channel had he published a third.
But to rein in out-of-control AI, which can generate mountains of text, audio, and video at the click of a mouse, countries of every persuasion on free speech, even those decidedly against it, will need to agree on basic rules of the road.
Watch the full livestream panel discussion: "Live from the Paris Peace Forum: Embracing technology to protect democracy"
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect ›
- How the EU designed the new iPhone ›
- Regulate AI: Sure, but how? ›
- AI and data regulation in 2023 play a key role in democracy ›
- Ian Bremmer: How AI may destroy democracy ›
- The UN takes on AI ›
- Did the US steal the UK’s AI thunder? ›
- Paris Peace Forum Director General Justin Vaïsse: Finding common ground - GZERO Media ›
- At the Paris Peace Forum, grassroots activists highlight urgent issues - GZERO Media ›
How are emerging technologies helping to shape democracy?
How do you know that what you are seeing, hearing, and reading is real?
It’s not an abstract question: Artificial intelligence technology allows anyone with an internet connection and a half-decent laptop to fabricate entirely fictitious video, audio, and text and spread it around the world in the blink of an eye.
The fabricated media itself may be ephemeral, but the threat it poses to governments, journalists, corporations, and ordinary people is here to stay. That’s what Julien Pain, journalist and host of Franceinfo, tried to get at during the GZERO Global Stage discussion he moderated live from the 2023 Paris Peace Forum.
In response to a poll showing that 77% of the GZERO audience felt democracies are weakening, Eléonore Caroit, vice president of the French Parliament’s Foreign Affairs Committee, pointed out that the more alarming finding is that many people around the globe are frightened enough to trade away democratic liberties for the purported stability of unfree governments, a trend authoritarian regimes exploit using AI.
“Democracy is getting weaker, but what does that provoke in you?” she asked. “Do you feel protected in an undemocratic regime? Because that is what worries me, not just that democracy is getting weaker but that fewer people seem to care about it.”
Ian Bremmer, president and founder of the Eurasia Group and GZERO Media, said a lot of that fear stems from an inability to know what to trust or even what is real as fabricated media pervades the internet. The very openness that democratic societies hold as the keystone of their civic structures exacerbates the problem.
“Authoritarian states can tell their citizens what to believe. People know what to believe, the space is made very clear, there are penalties for not believing those things,” Bremmer explained. “In democracies, you increasingly don’t know what to believe. What you believe has become tribalized and makes you insecure.”
Rappler CEO Maria Ressa, who is risking a century-long prison sentence to fight state suppression of the free press in the Philippines, called information chaos in democracies the “core” of the threat.
“Technology has taken over as the gatekeeper to the public sphere,” she said, adding that tech companies “have abdicated responsibility when lies spread six times faster than the truth” on social media platforms.
Microsoft vice chair and president Brad Smith offered a striking example from Canada, in which Russia targeted a pro-Ukraine activist with AI-generated audio of a completely fabricated statement. The clip was spliced into a real TV broadcast and spread across social media, discrediting years of the activist’s work within minutes.
The good news, Smith said, is that AI can also be used to help fight disinformation campaigns.
“AI is an extraordinarily powerful tool to identify patterns within data,” he said. “For example, after the fire in Lahaina, we detected the Chinese using an influence network of more than a hundred influencers — all saying the same thing at the same time in more than 30 different languages” to spread a conspiracy theory that the US government deliberately started the blaze.
All the panelists agreed on one crucial next step: aligning all the stakeholders — many with competing interests and a paucity of mutual trust — to create basic rules of the road on AI and how to punish its misuse, which will help ordinary people rebuild trust and feel safer.
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
- Stop misinformation blame game — let's do something about it ›
- Christchurch Call had a global impact on tech giants - Microsoft's Brad Smith ›
- What does democracy look like in Modi's India? ›
- Ian Bremmer: How AI may destroy democracy ›
- AI, election integrity, and authoritarianism: Insights from Maria Ressa - GZERO Media ›
- Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa - GZERO Media ›
- How AI threatens elections - GZERO Media ›
- Paris Peace Forum Director General Justin Vaïsse: Finding common ground - GZERO Media ›
- At the Paris Peace Forum, grassroots activists highlight urgent issues - GZERO Media ›
- UN's Rebeca Grynspan on the world’s debt crisis: Can it be solved? - GZERO Media ›