How the UN is combating disinformation in the age of AI
Disinformation is running rampant in today’s world. The internet, social media, and AI — combined with declining trust in major institutions — have created an ecosystem ripe for exploitation by nefarious actors aiming to spread false and hateful narratives. Meanwhile, governments worldwide are struggling to get big tech companies to take substantive steps to combat disinformation. And at the global level, the UN’s priorities are also being hit hard by these trends.
“We can't bring about and generate stability in fragile environments if populations are turning against our peacekeepers as a result of lies being spread against them online. We can't make progress on climate change if people are being led to believe first of all, that maybe it doesn't even exist, or that it's not as bad as they thought, or that it's actually too late and there's nothing that they can do about it,” Melissa Fleming, the UN's Under-Secretary-General for Global Communications, told GZERO in a conversation at the SDG Media Zone during the 79th UN General Assembly.
“The UN alone cannot tackle these problems without civil society, without people. And the people are what drives political agendas. So it's really important for us to work on our information ecosystems together,” Fleming added.
Though Fleming said that many in the UN are excited by AI's myriad potential benefits, she also emphasized the serious problems it's already posing in terms of accelerating the spread of disinformation – particularly via deepfakes.
“We've spent a lot of time also trying to educate the public on how to spot misinformation and disinformation and how to tell if a photo is real or if it is fake. In the AI information age, that's going to become nearly impossible,” Fleming said.
“So we're calling on AI actors to really create safety by design, and don't leave it only to the users to be able to try to figure out how to navigate this. They are designing these instruments, and they can be part of the solution,” she added.
Old MacDonald had a Russian bot farm
On July 9, the US Department of Justice announced it disrupted a Russian bot farm that was actively using generative AI to spread disinformation worldwide. The department seized two domain names and probed 1,000 social media accounts on X (formerly known as Twitter) in collaboration with the FBI as well as Canadian and Dutch authorities. X voluntarily suspended the accounts, the government said.
The Kremlin-approved effort, which has been active since at least 2022, was spearheaded by an unnamed editor at RT, the Russian state-run media outlet, who created fake social media personas and posted pro-Putin and anti-Ukraine sentiments on X. It's unclear which AI tools were used to generate the social media posts.
“Today’s actions represent a first in disrupting a Russian-sponsored Generative AI-enhanced social media bot farm,” FBI Director Christopher Wray wrote in a statement. Wray said that Russia intended to use this bot farm to undermine allies of Ukraine and “influence geopolitical narratives favorable to the Russian government.”
Russia has long tried to sow chaos online in the United States, but the Justice Department’s latest action signals that it’s ready to intercept inorganic social media activity — especially when it’s supercharged with AI.
Battle of the bots: Trump trial
Talk about courting attention. Former President Donald Trump’s guilty verdict in his hush money trial on 34 felony counts captured the public’s imagination – some to rejoice, others to reject – and much of the debate played out on X, formerly known as Twitter.
But, dearest gentle reader, we humans were not alone. Internet bots also immediately got to work to manipulate the online conversation. As a part of our ongoing investigation into how disinformation is affecting the 2024 election and US democracy, we partnered with Cyabra, a disinformation detection firm, to investigate how fake profiles online responded to the Trump trial.
After analyzing 22,000 pieces of trial-related content, Cyabra found that 17% came from fake accounts. While real people made up the majority of posts, 55% of the inauthentic posts were aimed at discrediting the US justice system and portraying Trump as a victim of a biased system.
Regardless of how one feels about the verdict, posts like these further erode voters' faith in institutions at a time when trust in them is already at an all-time low. Plummeting trust in institutions is also fueling conspiracy theories. To learn about the theories with the biggest influence on the 2024 election, check out GZERO's new immersive project here.
Battle of the bots: AOC under attack
GZERO teamed up with Cyabra, a disinformation detection firm, to investigate how fake actors on the internet could be shaping interactions with Rep. Alexandria Ocasio-Cortez's posts.
They found that 27% of responses to her X posts condemning US involvement in Israel’s Gaza operations and Columbia University’s use of police against protesters were from fake accounts.
The most common words used by the fake accounts were "Hamas" and "terrorist," and their comments usually accused the congresswoman of sympathizing with terrorists or inciting violence. Many also compared the student protests to the Jan. 6 riots, arguing that a double standard was being applied to the protesters because of their political agenda.
AI election safeguards aren’t great
The Center for Countering Digital Hate (CCDH), a British nonprofit, tested Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools' protections a whopping 41% of the time.
Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places," CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”
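For readers curious how a "bypass rate" like that 41% figure gets tallied, here is a minimal sketch in Python. It is not CCDH's actual test harness; the generate_image callable and the fields on its result are hypothetical stand-ins for whichever image tool is being evaluated.

```python
# Minimal sketch of how a safeguard "bypass rate" could be tallied.
# This is NOT CCDH's methodology; generate_image() and the fields on its
# result are hypothetical stand-ins for the image tool under test.

from dataclasses import dataclass


@dataclass
class TestResult:
    prompt: str
    refused: bool          # the tool blocked the request
    image_produced: bool   # a disallowed image came back anyway


def run_prompts(prompts, generate_image):
    """Send each election-related prompt to the tool and record the outcome."""
    results = []
    for prompt in prompts:
        outcome = generate_image(prompt)  # hypothetical API call
        results.append(TestResult(prompt, outcome.refused, outcome.image is not None))
    return results


def bypass_rate(results):
    """Share of prompts where the safeguard failed to block generation."""
    failures = sum(1 for r in results if not r.refused and r.image_produced)
    return failures / len(results) if results else 0.0

# Example: 41 failures across 100 prompts would give a bypass rate of 0.41.
```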
Tracking anti-Navalny bot armies
In an exclusive investigation into the disinformation surrounding online reactions to Alexei Navalny's death, GZERO asks whether it is possible to track the birth of a bot army. Was Navalny's tragic death accompanied by a massive online propaganda campaign? We investigated, with the help of a company called Cyabra.
Alexei Navalny knew he was a dead man the moment he returned to Moscow in January 2021. Vladimir Putin had already tried to kill him with the nerve agent Novichok, an attack that sent Navalny to Germany for treatment. The poison is one of Putin's signatures, like pushing opponents out of windows or shooting them in the street. Navalny knew Putin would try again.
Still, he came home.
“If your beliefs are worth something,” Navalny wrote on Facebook, “you must be willing to stand up for them. And if necessary, make some sacrifices.”
He made the ultimate sacrifice on Feb. 16, when Russian authorities announced, with Arctic banality, that he had "died" at the IK-3 penal colony more than 1,200 miles north of Moscow. A frozen gulag. "Convict Navalny A.A. felt unwell after a walk, almost immediately losing consciousness," they announced, as if quoting a passage from Koestler's "Darkness at Noon." Later, deploying the pitch-black doublespeak of all dictators, they decided to call it "sudden death syndrome."
Worth noting: Navalny was filmed the day before, looking well. There is no body for his wife and two kids to see. No autopsy.
As we wrote this morning, Putin is winning on all fronts. Sensing NATO support for the war in Ukraine is wavering – over to you, US Congress – Putin is acting with confident impunity. His army is gaining ground in Ukraine. He scored a propaganda coup when he toyed with dictator-fanboy Tucker Carlson during his two-hour PR session thinly camouflaged as an “interview.” And just days after Navalny was declared dead, the Russian pilot Maksim Kuzminov, who defected to Ukraine with his helicopter last August, was gunned down in Spain.
And then, of course, there is the disinformation war, another Putin battleground. Navalny's death got me wondering whether there would be an orchestrated disinformation campaign around the event and, if so, whether there was any way to track it. Would there be, say, an online release of shock bot troops to combat Western condemnation of Navalny's death and blunt the blowback?
It turns out there was.
To investigate, GZERO asked the “social threat information company” Cyabra, which specializes in tracking bots, to look for disinformation surrounding the online reactions to the news about Navalny. The Israeli company says its job is to uncover “threats” on social platforms. It has built AI-driven software to track “attacks such as impersonation, data leakage, and online executive perils as they occur.”
Cyabra's team focused on the tweets President Joe Biden and Prime Minister Justin Trudeau posted condemning Navalny's death. Their software analyzed the number of bots that targeted these official accounts. And what they found was fascinating.
According to Cyabra, “29% of the Twitter profiles interacting with Biden’s post about Navalny on X were identified as inauthentic.” For Trudeau, the number was 25%.
[Chart courtesy of Cyabra]
So, according to Cyabra, more than a quarter of the reaction you saw on X to these two leaders' posts about Navalny's death came from bots, not humans. In other words, a bullshit campaign of misinformation.
This finding raises a lot of questions. What's the baseline level of this kind of corruption, so we have a sense of comparison? For example, is 29% bot traffic on Biden's tweet about Navalny's death a lot, or is everything on social media flooded with the same amount of crap? How does Cyabra's team actually track bots, and how accurate is their data? Are they missing bots that are well-disguised, or, on the other side, are some humans being labeled as "inauthentic"? In short, what does this really tell us?
In the year of elections, with multiple wars festering and AI galloping ahead of regulation, the battle against disinformation and bots is more consequential than ever. The bot armies of the night are marching. We need to find a torch to see where they are and if there are any tools that can help us separate fact from fiction.
Tracking bot armies is a job that often happens in the shadows, and it comes with a lot of challenges. Can this be done without violating people’s privacy? How hard is this to combat? I spoke with the CEO of Cyabra, Dan Brahmy, to get his view.
Solomon: When Cyabra tracked the reactions to the tweets from President Joe Biden and Prime Minister Trudeau about the “death” of Navalny, you found more than 25% of the accounts were inauthentic. What does this tell us about social media and what people can actually trust is real?
Brahmy: From elections to sporting events to other significant international headline events, social media is often the destination for millions of people to follow the news and share their opinion. Consequently, it is also the venue of choice for malicious actors to manipulate the narrative.
This was also the case when Cyabra looked into President Biden and Prime Minister Trudeau’s X post directly blaming Putin for Navalny’s death. These posts turned out to be the ideal playing ground for narrative-manipulating bots. Inauthentic accounts on a large scale attacked Biden and Trudeau and blamed them for their foreign and domestic policies while attempting to divert attention from Putin and the negative narrative surrounding him.
The high number of fake accounts detected by Cyabra, together with the speed at which those accounts engaged in the conversation to divert and distract following the announcement of Navalny’s death, shows the capabilities of malicious actors and their intentions to conduct sophisticated influence operations.
Solomon: Can you tell where these are from and who is doing it?
Brahmy: Cyabra monitors publicly available information on social media and does not track IP addresses or any private information. Cyabra does collect the location an account publicly shares. When analyzing the Navalny conversation, Cyabra saw that the majority of the accounts claimed to be located in the US.
Solomon: There is always the benchmark question: How much “bot” traffic or inauthentic traffic do you expect at any time, for any online event? Put the numbers we see here for Trudeau and Biden in perspective.
Brahmy: The average percentage of fake accounts participating in an everyday conversation online typically varies between 4 and 8%. Cyabra’s discovery of 25-29% fake accounts related to this conversation is alarming, significant, and should give us cause for concern.
Solomon: Ok, then there is the accuracy question. How do you actually identify a bot, and how do you know, given the sophistication of AI and new bots, that you are not missing a lot of them? Is it easier to find "obvious bots" – i.e., something that tweets every two minutes, 24 hours a day – than, say, a series of bots that look and act very human?
Brahmy: Using advanced AI and machine learning, Cyabra analyzes a profile’s activity and interactions to determine if it demonstrates non-human behaviors. Cyabra’s proprietary algorithm consists of over 500 behavioral parameters. Some parameters are more intuitive, like the use of multiple languages, while others require in-depth expertise and advanced machine learning. Cyabra’s technology works at scale and in almost real-time.
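To make the idea of behavioral scoring concrete, here is a heavily simplified sketch in Python. Cyabra's production system reportedly weighs more than 500 proprietary parameters; the three features, the weights, and the threshold below are invented purely for illustration.

```python
# Heavily simplified sketch of behavioral bot scoring. Cyabra's system
# reportedly uses 500+ proprietary parameters; the features, weights, and
# threshold below are invented purely for illustration.

from dataclasses import dataclass


@dataclass
class Profile:
    posts_per_day: float    # posting tempo
    account_age_days: int   # how recently the account was created
    reply_ratio: float      # share of activity that is replies to others


def bot_likelihood(p: Profile) -> float:
    """Return a 0-1 score; higher means more bot-like (toy heuristic)."""
    score = 0.0
    if p.posts_per_day > 100:     # inhumanly fast posting tempo
        score += 0.4
    if p.account_age_days < 30:   # account created very recently
        score += 0.3
    if p.reply_ratio > 0.9:       # exists almost only to reply at others
        score += 0.3
    return min(score, 1.0)


def share_inauthentic(profiles, threshold: float = 0.6) -> float:
    """Fraction of profiles whose score crosses the (arbitrary) threshold."""
    flagged = sum(1 for p in profiles if bot_likelihood(p) >= threshold)
    return flagged / len(profiles) if profiles else 0.0
```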
Solomon: There is so much disinformation anyway – actual people who lie, mislead, falsify, scam – how much does this matter?
Brahmy: The creation and activities of fake accounts on social media (whether it be a bot, sock puppet, troll, or otherwise) should be treated with the utmost seriousness. Fake accounts are almost exclusively created for nefarious purposes. By identifying inauthentic profiles and then analyzing their behaviors and the false narratives they are spreading, we can understand the intentions of malicious actors and remedy them as a society.
While we all understand that the challenge of disinformation is pervasive and a threat to society, being able to conduct the equivalent of an online CT scan reveals the areas that most urgently need our attention.
Solomon: Why does it matter in a big election year?
Brahmy: More than 4 billion people globally are eligible to vote in 2024, with over 50 countries holding elections. That’s 40% of the world’s population. Particularly during an election year, tracking disinformation is important – from protecting the democratic process, ensuring informed decision-making, preventing foreign interference, and promoting transparency, to protecting national security. By tracking and educating the public on the prevalence of inauthentic accounts, we slowly move closer to creating a digital environment that fosters informed, constructive, and authentic discourse.
You can check out part of the Cyabra report here.
Al Gore's take on American democracy, climate action, and "artificial insanity"
Listen: In this episode of the GZERO World podcast, Ian Bremmer sits down with former US Vice President Al Gore on the sidelines of Davos in Switzerland. Gore, who is no stranger to contested elections, shared his perspective on the current landscape of American politics and, naturally, on the climate advocacy for which he is renowned.
While the mainstage discussions at the World Economic Forum throughout the week delved into topics such as artificial intelligence, conflicts in Ukraine and the Middle East, and climate change, behind the scenes, much of the discourse was centered on profound concerns about the upcoming 2024 US election and the state of American democracy. The US presidential election presents substantial risks, particularly with Donald Trump on the path to securing the GOP nomination.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Azeem Azhar explores the future of AI
AI was all the rage at Davos this year – and for good reason. As we cover each week in our GZERO AI newsletter, artificial intelligence is impacting everything from regulatory debates and legal norms to climate change, disinformation, and identity theft. GZERO Media caught up with Azeem Azhar – founder of Exponential View, author and analyst, and a GZERO AI guest columnist – for his insights on the many issues facing the industry.
GZERO: However The New York Times' lawsuit against OpenAI on copyright grounds is resolved – whether it is settled or found for or against OpenAI – do you think large language models become less feasible in the long term?
Azeem Azhar: Copyright has always been a compromise. The compromise has been between how many rights should be afforded to creators, and ultimately, of course, what that really means is the big publishers who accumulate them and have the legal teams.
And harm is being done to research, the free exchange of knowledge, and cultural expression by creating these enclosures around our intellectual space. This compromise, which worked reasonably well perhaps 100 years ago, doesn't really work that well right now.
And now we have to say, “Well, we've got this new technology that could provide incredibly wide human welfare and when copyright was first imagined, those were not the fundamental axioms of the world.”
GZERO: Can you give me an example of something that could be attained by reforming copyright laws?
Azhar: Take Zambia. Zambia doesn't have very many doctors per capita. And because they don't have many doctors, they can't train many doctors. So you could imagine a situation where you can have widespread personalized AI tutoring to improve primary, secondary, and tertiary educational outcomes for billions of people.
And those will use large language models dependent on a vast variety of material that falls under the traditional frame of copyright.
GZERO: AI is great at finding places to be more efficient. Do you think there's a future in which AI is used to decrease the world's net per capita energy consumption?
Azhar: No, we won't decrease energy consumption because energy is health and energy is prosperity and energy is welfare. Over the next 30 years, energy use will grow higher and at a higher rate than it has over the last 30, and at the same time, we will entirely decarbonize our economy.
Effectively, you cannot find any countries that don't use lots of energy that you would want to live in and that are safe and have good human outcomes.
But how can AI help? Well, look at an example from DeepMind. DeepMind released this thing called GNoME at the end of last year, which helps identify thermodynamically stable materials.
And DeepMind’s system delivered 60 years of stable producible materials with their physical properties in just one shot. Now that's really important because a lot of the climate transition and the materiality question is about how we produce all the stuff for your iPods and your door frames and your water pipes in ways that are thermodynamically more efficient, and that's going to require new materials and so AI can absolutely help us do that.
GZERO: In 2024, we are facing over four dozen national-level elections in a completely changed disinformation environment. Are you more bullish or bearish on how governments might handle the challenge of AI-driven disinformation?
Azhar: It does take time for bad actors to actually make use of these technologies, so I don't think that deep fake video will significantly play a role this year because it's just a little bit too soon.
But distribution of disinformation, particularly through social media, matters a great deal and so too do the capacities and the behaviors of the media entities and the political class.
If you remember, in Gaza there was an explosion at a hospital, and one of the newswires reported within a few minutes that 500 people had been killed. There's no way that within a few minutes one can count 500 bodies. But other organizations, which are normally quite reputable, then picked it up.
That wasn't AI-driven disinformation. The trouble is the lie travels halfway around the world before the truth gets its trousers on. Do media companies need to put up a verification unit as the goalkeeper? Or do you put the idea of defending the truth and veracity and factuality throughout the culture of the organization?
GZERO: You made me think of an app that's become very popular in Taiwan over the last few months called Auntie Meiyu, which allows you to take a big group chat, maybe a family chat for example, and then you add Auntie Meiyu as a chatbot. And when Grandpa sends some crazy article, Auntie Meiyu jumps in and says, “Hey, this is BS and here’s why.”
She’s not preventing you from reading it. She's just giving you some additional information, and it's coming from a third party, so no family member has to take the blame for making Grandpa feel foolish.
Azhar: That is absolutely brilliant because, when you look back at the data from the US 2016 election, it wasn't the Instagram, TikTok, YouTube teens who were likely to be core spreaders of political misinformation. It was the over-60s, and I can testify to that with some of my experience with my extended family as well.
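Mechanically, a bot along those lines is easy to picture. Here is a minimal sketch in Python, assuming a hypothetical fact-check lookup and chat-platform hooks; it is not Auntie Meiyu's real implementation or API.

```python
# Minimal sketch of an Auntie Meiyu-style group-chat fact-checker: when a
# message with a link arrives, look it up against a fact-check source and,
# if it matches known misinformation, reply in the group with context.
# The lookup and chat hooks are hypothetical placeholders, not the app's API.

import re

URL_PATTERN = re.compile(r"https?://\S+")


def extract_urls(message_text: str):
    """Pull any links out of a chat message."""
    return URL_PATTERN.findall(message_text)


def handle_group_message(message_text, lookup_fact_check, send_reply):
    """Check shared links and post a gentle correction if one is flagged.

    lookup_fact_check(url) -> dict like {"flagged": bool, "summary": str}
    send_reply(text)       -> posts a message back into the group chat
    Both are placeholders for whatever chat platform and database are used.
    """
    for url in extract_urls(message_text):
        verdict = lookup_fact_check(url)
        if verdict.get("flagged"):
            send_reply(
                "Heads up: this article has been flagged by fact-checkers. "
                + verdict.get("summary", "")
            )
```

The design point is the one made above: the correction arrives from a neutral third party in the chat, so no family member has to be the one contradicting Grandpa.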
GZERO: As individuals are thinking about risks that AI might pose to them – elderly relatives being scammed or someone generating fake nude images of real people – is there anything an individual can do to protect themselves from some of the risks that AI might pose to their reputation or their finances?
Azhar: Wow, that's a really hard question. Have really nice friends.
I am much more careful now than I was five years ago and I'm still vulnerable. When I have to make transactions and payments I will always verify by doing my own outbound call to a number that I can verify through a couple of other sources.
I very rarely click on links that are sent to me. I try to double-check when things come in, but this is, to be honest, just classic infosec hygiene that everyone should have.
With my elderly relatives, the general rule is you don't do anything with your bank account ever unless you've got one of your kids with you. Because we’ve found ourselves, all of us, in the digital equivalent of that Daniel Day-Lewis film “Gangs of New York,” where there are a lot of hoodlums running around.