Trending Now
We have updated our Privacy Policy and Terms of Use for Eurasia Group and its affiliates, including GZERO Media, to clarify the types of data we collect, how we collect it, how we use data and with whom we share data. By using our website you consent to our Terms and Conditions and Privacy Policy, including the transfer of your personal data to the United States from your country of residence, and our use of cookies described in our Cookie Policy.
{{ subpage.title }}
How the UN is combating disinformation in the age of AI
Disinformation is running rampant in today’s world. The internet, social media, and AI — combined with declining trust in major institutions — have created an ecosystem ripe for exploitation by nefarious actors aiming to spread false and hateful narratives. Meanwhile, governments worldwide are struggling to get big tech companies to take substantive steps to combat disinformation. And at the global level, the UN’s priorities are also being hit hard by these trends.
“We can't bring about and generate stability in fragile environments if populations are turning against our peacekeepers as a result of lies being spread against them online. We can't make progress on climate change if people are being led to believe first of all, that maybe it doesn't even exist, or that it's not as bad as they thought, or that it's actually too late and there's nothing that they can do about it,” Melissa Fleming, the UN's Under-Secretary-General for Global Communications, told GZERO in a conversation at the SDG Media Zone during the 79th UN General Assembly.
“The UN alone cannot tackle these problems without civil society, without people. And the people are what drives political agendas. So it's really important for us to work on our information ecosystems together,” Fleming added.
Though Fleming said that many in the UN are excited by AI's myriad potential benefits, she also emphasized the serious problems it’s already posing in terms of accelerating the spread of disinformation—particularly via deepfakes.
“We've spent a lot of time also trying to educate the public on how to spot misinformation and disinformation and how to tell if a photo is real or if it is fake. In the AI information age, that's going to become nearly impossible,” Fleming said.
“So we're calling on AI actors to really create safety by design, and don't leave it only to the users to be able to try to figure out how to navigate this. They are designing these instruments, and they can be part of the solution,” she added.
AI election safeguards aren’t great
The British nonprofit used Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator for testing in February, simply tying in different text prompts related to the US elections. The group was able to bypass the tools’ protections a whopping 41% of the time.
Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places," CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”
Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa
How do you keep guardrails on AI? “In the United States, historically, we don't respond with censorship. We respond with lawyers,” said Ian Bremmer, President and Founder of the Eurasia Group & GZERO Media, speaking in a GZERO Global Stage discussion live from the 2023 Paris Peace Forum.
Setting up basic legal structures around artificial intelligence is the first step toward building an infrastructure of accountability that can keep the technology from doing at least as much harm as good.
The European Union has an early lead in setting up systems, but Rappler CEO Maria Ressa said, “the EU is winning the race of the turtles” as the entire globe lags far behind the pace of technological advancement. Without legal structures and a healthy free press and civic society in place, democracies will struggle to remain resilient to the threats of AI-generated disinformation.
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
- How are emerging technologies helping to shape democracy? ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- The AI power paradox: Rules for AI's power ›
- AI and data regulation in 2023 play a key role in democracy ›
- Ian Bremmer: How AI may destroy democracy ›
- Paris Peace Forum Director General Justin Vaïsse: Finding common ground - GZERO Media ›
- At the Paris Peace Forum, grassroots activists highlight urgent issues - GZERO Media ›
- AI is an opportunity to build trust with the Global South: UN's Amandeep Singh Gill - GZERO Media ›
- Ian Bremmer: On AI regulation, governments must step up to protect our social fabric - GZERO Media ›