Should AI content be protected as free speech?
Americans love free speech, and for all its flaws, the American government does take a lighter hand than many other major democracies. But even in the US, there are limits. So where do misinformation and AI-generated fabricated imagery and audio fit into free speech?
Eléonore Caroit, vice president of the French Parliament’s Foreign Affairs Committee, says she understands the sensitivities around taking down political speech in the US. "In the US, you have the First Amendment, which is so important that anything else could be seen as censorship,” she said. “Whereas, in France, I think we have a higher tolerance to some sort of regulation, which is not going to be seen as censorship as it would in the US.”
Caroit spoke at a GZERO Global Stage discussion with Ian Bremmer, President and Founder, Eurasia Group & GZERO Media, Rappler CEO Maria Ressa, and Microsoft Vice Chair and President Brad Smith, moderated by Julien Pain, journalist and host of Franceinfo, live from the 2023 Paris Peace Forum.
Where should the line fall between free speech and censorship? If someone publishes, say, a campaign video made with AI showing misleading images of immigrants rioting in order to call for hardline migration policies, would the government be within its rights to force tech companies to remove it from their platforms? Caroit noted that France in fact removed two AI-made videos from far-right presidential candidate Eric Zemmour, and could have taken down his entire YouTube channel had he published a third.
But to rein in out-of-control AI, which can generate mountains of text, audio, and video at the click of a mouse, countries of all persuasions on free speech — even those decidedly “anti” — will need to come to an agreement on basic rules of the road.
Watch the full livestream panel discussion: "Live from the Paris Peace Forum: Embracing technology to protect democracy"
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect ›
- How the EU designed the new iPhone ›
- Regulate AI: Sure, but how? ›
- AI and data regulation in 2023 play a key role in democracy ›
- Ian Bremmer: How AI may destroy democracy ›
- The UN takes on AI ›
- Did the US steal the UK’s AI thunder? ›
- Paris Peace Forum Director General Justin Vaïsse: Finding common ground - GZERO Media ›
- At the Paris Peace Forum, grassroots activists highlight urgent issues - GZERO Media ›
How AI and deepfakes are being used for malicious reasons
What could someone do to you with an entirely false audio clip that sounds just like you?
Could they damage your relationships? Sure. Scam your loved ones? Probably. Hurt your career or even destroy your public reputation entirely? That’s what nearly happened to one pro-Ukrainian activist in Canada, according to Microsoft Vice Chair and President Brad Smith.
Russian agents “created a fake audio of him saying something he never said. They then took a legitimate real broadcast of the CBC, the Canadian Broadcasting Corporation, and they spliced into that this deepfake audio,” Smith explained. “This is the recipe that will probably be followed: take legitimate things and insert forgery within it.”
Smith spoke at a GZERO Global Stage discussion with Ian Bremmer, President and Founder, Eurasia Group & GZERO Media, Eléonore Caroit, Vice-President of the French Parliament’s Foreign Affairs Committee, and Rappler CEO Maria Ressa, moderated by Julien Pain, journalist and host of Franceinfo, live from the 2023 Paris Peace Forum.
Watch the full livestream panel discussion: "Live from the Paris Peace Forum: Embracing technology to protect democracy"
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft.
- Be very scared of AI + social media in politics ›
- The UN takes on AI ›
- Did the US steal the UK’s AI thunder? ›
- Paris Peace Forum Director General Justin Vaïsse: Finding common ground - GZERO Media ›
- AI for all: Leave no one behind, says Microsoft's Brad Smith - GZERO Media ›
- Deepfakes are ‘fraud,’ says Microsoft CEO Brad Smith - GZERO Media ›
- AI vs. truth: Battling deepfakes amid 2024 elections - GZERO Media ›
- UN's Rebeca Grynspan on the world’s debt crisis: Can it be solved? - GZERO Media ›
How AI will roil politics even if it creates more jobs
Whether artificial intelligence will ultimately be good or bad for humanity is an open debate. But there’s another, more immediate issue that often gets lost in the scrum: Even if AI eventually creates more jobs and opportunities than it destroys, what happens to the actual people who lose their jobs on the way to that happier future? And how might their grievances shape politics in the meantime?
Think of the people who once earned a living by building or driving horse-drawn carriages. In a matter of years at the beginning of the last century, railroads and the nascent automotive industry erased their livelihoods entirely. Yes, those sectors ended up creating vastly more jobs than they killed, but it was tough luck for those in the buggy industry who weren’t able to learn new skills or move to those new jobs in time.
The same goes for US manufacturing workers whose jobs were shipped off to China or Mexico in the 1980s and 1990s. Or today’s Bangladeshi garment workers, who are threatened by US robots that can now make textiles better and faster than humans can. Although globalization and offshoring increased most people’s standards of living globally, that’s cold comfort for the people who were left jobless as a result.
And so it is today with AI. Between now and the time when AI is fully and beneficially integrated into all aspects of many existing (and new) jobs, a lot of people are going to lose their work, and will quickly find themselves in a sink-or-swim situation that forces them to learn new skills, fast. Can they?
We don’t know who those people are just yet. White-collar workers like coders, paralegals, financial analysts and traders, or (gulp!) journalists and creatives? Call center workers in emerging markets like the Philippines, where the industry accounts for as much as 7% of GDP?
But whoever gets edged out by AI will have real grievances and powerful platforms that can disrupt politics quickly. The backlash will be fierce, and it will be political. Social media offers a megaphone that buggy drivers in the 1880s or steel workers a century later could scarcely have dreamed of.
What’s more, AI threatens folks who are already in positions of relative power in many wealthy nations. Imagine an “Occupy Wall Street” style movement against AI led by, well, Wall Street itself.
The political ramifications will be significant. Consider the ways in which Donald Trump’s historic 2016 campaign weaponized the resentment of people who felt left behind by outsourcing and automation.
Displacement by AI will create similar grievances that policymakers will either have to head off through accelerated job retraining or redress through expanded social safety nets for those left behind. Who will AI’s victims vote for in the future?