Rebuilding post-election trust in the age of AI
In a GZERO Global Stage discussion at the 7th annual Paris Peace Forum, Teresa Hutson, Corporate Vice President at Microsoft, reflected on the anticipated impact of generative AI and deepfakes on global elections. Despite widespread concerns, she noted that deepfakes did not significantly alter electoral outcomes. Instead, Hutson highlighted a more subtle effect: the erosion of public trust in online information, a phenomenon she referred to as the "liar's dividend."
"What has happened as a result of deepfakes is... people are less confident in what they're seeing online. They're not sure. The information ecosystem is a bit polluted," Hutson explained. She emphasized the need for technological solutions like content credentials and content provenance to help restore trust by verifying the authenticity of digital content.
Hutson also raised concerns about deepfakes targeting women in public life with non-consensual imagery, potentially deterring them from leadership roles. Looking ahead, she stressed the importance of mitigating harmful uses of AI, protecting vulnerable groups, and establishing appropriate regulations to advance technology in trustworthy ways.
This conversation was presented by GZERO in partnership with Microsoft at the 7th annual Paris Peace Forum. The Global Stage series convenes heads of state, business leaders, and technology experts from around the world for critical debates about the geopolitical and technological trends shaping our world.
Follow GZERO coverage of the Paris Peace Forum here: https://www.gzeromedia.com/global-stage
How the UN is combating disinformation in the age of AI
Disinformation is running rampant in today’s world. The internet, social media, and AI — combined with declining trust in major institutions — have created an ecosystem ripe for exploitation by nefarious actors aiming to spread false and hateful narratives. Meanwhile, governments worldwide are struggling to get big tech companies to take substantive steps to combat disinformation. And at the global level, the UN’s priorities are also being hit hard by these trends.
“We can't bring about and generate stability in fragile environments if populations are turning against our peacekeepers as a result of lies being spread against them online. We can't make progress on climate change if people are being led to believe first of all, that maybe it doesn't even exist, or that it's not as bad as they thought, or that it's actually too late and there's nothing that they can do about it,” Melissa Fleming, the UN's Under-Secretary-General for Global Communications, told GZERO in a conversation at the SDG Media Zone during the 79th UN General Assembly.
“The UN alone cannot tackle these problems without civil society, without people. And the people are what drives political agendas. So it's really important for us to work on our information ecosystems together,” Fleming added.
Though Fleming said that many in the UN are excited by AI's myriad potential benefits, she also emphasized the serious problems it’s already posing in terms of accelerating the spread of disinformation—particularly via deepfakes.
“We've spent a lot of time also trying to educate the public on how to spot misinformation and disinformation and how to tell if a photo is real or if it is fake. In the AI information age, that's going to become nearly impossible,” Fleming said.
“So we're calling on AI actors to really create safety by design, and don't leave it only to the users to be able to try to figure out how to navigate this. They are designing these instruments, and they can be part of the solution,” she added.
Old MacDonald had a Russian bot farm
On July 9, the US Department of Justice announced it disrupted a Russian bot farm that was actively using generative AI to spread disinformation worldwide. The department seized two domain names and probed 1,000 social media accounts on X (formerly known as Twitter) in collaboration with the FBI as well as Canadian and Dutch authorities. X voluntarily suspended the accounts, the government said.
The Kremlin-approved effort, which has been active since at least 2022, was spearheaded by an unnamed editor at RT, the Russian state-run media outlet, who created fake social media personas and posted pro-Putin and anti-Ukraine sentiments on X. It’s unclear which AI tools were used to generate the social media posts.
“Today’s actions represent a first in disrupting a Russian-sponsored Generative AI-enhanced social media bot farm,” FBI Director Christopher Wray wrote in a statement. Wray said that Russia intended to use this bot farm to undermine allies of Ukraine and “influence geopolitical narratives favorable to the Russian government.”
Russia has long tried to sow chaos online in the United States, but the Justice Department’s latest action signals that it’s ready to intercept inorganic social media activity — especially when it’s supercharged with AI.
The Disinformation Election: Will the wildfire of conspiracy theories impact the vote?
Trust in institutions is at an all-time low, and only 44% of Americans have confidence in the honesty of elections. Distrust and election-related disinformation are leaving society vulnerable to conspiracy theories.
Ian Bremmer, president of Eurasia Group and GZERO Media, notes that American democracy is in crisis, thanks in large part to the “one thing not in short supply this election season: conspiracy theories.”
As part of GZERO Media’s election coverage, we are tracking the impact of disinformation and conspiracy theories on democracy. To get a sense of how this election may be pulled down a dark and dangerous rabbit hole, click here for our interactive guide to conspiracy theories.
Are bots trying to undermine Donald Trump?
In an exclusive investigation into online disinformation surrounding the reaction to Donald Trump’s hush-money trial, GZERO asks whether bots are being employed to shape debates about the former president’s guilt or innocence. We worked with Cyabra, a firm that specializes in tracking bots, to look for disinformation in the online reactions to Trump’s trial. Is Trump’s trial the target of a massive online propaganda campaign – and, if so, which side is to blame?
_____________
Adult film actress Stormy Daniels testified on Tuesday against former President Donald Trump, detailing her sexual encounter with Trump in 2006 and her $130,000 hush money payment from Trump's ex-attorney Michael Cohen before the 2016 election. In the process, she shared explicit details and said she had not wanted to have sex with Trump. This led the defense team to call for a mistrial. Their claim? That the embarrassing aspects were “extraordinarily prejudicial.”
Judge Juan Merchan denied the motion – but also agreed that some of the details from Daniels were “better left unsaid.”
The trouble is, plenty is being said, inside the courtroom and in the court of public opinion – aka social media. With so many people learning about the most important trials of the century online, GZERO partnered with Cyabra to investigate how bots are influencing the dialogue surrounding the Trump trials. For a man once accused of winning the White House on the back of Russian meddling, the results may surprise you.
Bots – surprise, surprise – are indeed rampant amid the posts about Trump’s trials online. Cyabra’s AI algorithm analyzed 7,500 posts with hashtags and phrases related to the trials and found that 17% of Trump-related tweets came from fake accounts. The team estimated that these inauthentic tweets reached a whopping 49.1 million people across social media platforms.
Ever gotten into an argument on X? Your opponent might not have been real. Cyabra found that the bots frequently comment and interact with real accounts.
The bots also frequently comment on tweets from Trump's allies in large numbers, leading X’s algorithm to amplify those tweets. Cyabra's analysis revealed that, on average, bots are behind 15% of online conversations about Trump. However, in certain instances, particularly concerning specific posts, bot activity surged to over 32%.
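The arithmetic behind figures like these is straightforward: count posts per conversation, count how many came from accounts judged inauthentic, and flag the conversations where the share is high. A toy illustration of that tally (not Cyabra's actual methodology, and with invented labels):

```python
# Toy bot-share tally, not Cyabra's methodology: given (conversation, is_bot)
# labels, compute each conversation's share of inauthentic posts and flag
# those at or above a threshold.
from collections import defaultdict

def bot_share_by_conversation(posts, threshold=0.15):
    """posts: iterable of (conversation_id, is_bot) pairs.
    Returns {conversation_id: share} for conversations over the threshold."""
    totals = defaultdict(int)
    bots = defaultdict(int)
    for convo_id, is_bot in posts:
        totals[convo_id] += 1
        bots[convo_id] += int(is_bot)
    return {c: bots[c] / totals[c]
            for c in totals if bots[c] / totals[c] >= threshold}

# 32 bot comments out of 100 mirrors the worst case reported above.
sample = [("ally_post", True)] * 32 + [("ally_post", False)] * 68
print(bot_share_by_conversation(sample))  # {'ally_post': 0.32}
```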
But what narrative do they want to spread? Well, it depends on who’s behind the bot. If you lean left, you might assume most of the bots were orchestrated by MAGA hat owners – if you lean right, you’ll be happy to learn that’s not the case.
Rather than a bot army fighting in defense of Trump, Cyabra found that 73% of the posts were negative about the former president, offering quotes like “I don’t think Trump knows how to tell the truth” and “not true to his wife, not true to the church, not true to the country, just a despicable traitor.”
Meanwhile, only 4% were positive. Among the positive posts, Cyabra saw a pattern of bots framing the legal proceedings as biased and painting Trump as a political martyr. The tweets often came in the form of comments on Trump’s allies’ posts in support of the former president. For example, on a tweet from Marjorie Taylor Greene calling the trials “outrageous” and “election interference,” 32% of the comments were made by inauthentic profiles.
Many of the tweets and profiles analyzed were also indistinguishable from posts made by real people – a problem many experts fear is only going to worsen. As machine learning and artificial intelligence advance, so too will the fake accounts and attempts to shape political narratives.
Moreover, while most of the bots came from the United States, by no means all of them did. The locations of some of the bots do not exactly read like a list of the usual suspects, with only three in China and zero in Russia (see map below).
[Map: locations of the bot accounts analyzed, via Cyabra]
This is just one set of data based on one trial, so there are limitations to drawing broader conclusions. But we do know, of course, that conservatives have long been accused of jumping on the bot-propaganda train to boost their political fortunes. In fact, Cyabra noted last year that pro-Trump bots were even trying to sow division amongst Republicans and hurt Trump opponents like Nikki Haley.
Still, Cyabra’s research, both then and now, shows that supporters of both the left and the right are involved in the bot game – and that, in this case, much of the bot-generated content was negative about Trump, which contradicts assumptions that his supporters largely operate bots. It’s also a stark reminder to ensure you’re dealing with humans in your next online debate.
Are US elections safe? Chris Krebs is optimistic
The debate around the US banning TikTok is a proxy for a larger question: How safe are democracies from high-tech threats, especially from places like China and Russia?
There are genuine concerns about the integrity of elections. What are the threats out there, and what can be done about them? No one understands this issue better than Chris Krebs, best known as the former director of the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).
In a high-profile showdown, Donald Trump fired Krebs in November 2020, after CISA publicly affirmed that the election was among the “most secure in history” and that the allegations of election corruption were flat-out wrong. Since then, Krebs has become the chief public policy officer at SentinelOne and cochairs the Aspen Institute’s U.S. Cybersecurity Working Group, and he remains at the forefront of the cyber threat world.
GZERO Publisher Evan Solomon spoke to him this week about what we should expect in this volatile election year.
Solomon: How would you compare the cyber threat landscape now to the election four years ago? Have the rapid advances in AI made a material difference?
Chris Krebs: The general threat environment related to elections tracks against the broader cyber threat environment. The difference here is that beyond just pure technical attacks on election systems, election infrastructure, and campaigns themselves, we have a parallel threat of information operations and influence operations – what we more broadly call disinformation.
This has picked up almost exponentially since 2016, when the Russians, as detailed in the Intelligence Community Assessment of January 2017, showed that you can get into the middle of domestic elections and pour kerosene on that conversation. That means it jumps into the real world, potentially even culminating in political violence like we saw on Jan. 6.
We saw the Iranians follow the lead in 2020. The intelligence community released another report in December that talked about how the Chinese attempted to influence the 2022 elections. We've seen the Russians are active too through a group we track called Doppelganger, specifically targeting the debate around the border and immigration in the US.
Solomon: When you say Doppelganger is “active,” what exactly does that mean in real terms?
Krebs: They use synthetic personas or take over existing personas that have some element of credibility and jump into the online discourse. They also use Pink Slime websites, which is basically fake media, and then get picked up through social media and move over to traditional media. They are taking existing divides and amplifying the discontent.
Solomon: Does it have a material impact on, say, election results?
Krebs: I was at an event back in 2019, and a former governor came up to me as we were talking about prepping for the 2020 election and said: “Hey, everything you just talked about sounds like opposition research, typical electioneering, and hijinks.”
And you know what? That's not totally wrong. But there is a difference.
Rather than just being normal domestic politics, now we have a foreign security service that's inserting itself in driving discourse domestically. And that's where there are tools that the intelligence services here in the US as well as our allies in the West have the ability to go in and disrupt.
They can get onto foreign networks and say, “Hey, I know that account right there. I am able to determine that the account which is pushing this narrative is controlled by the Russian security services, and we can do something with that.”
But here is the key: Once you have a social media influencer here in the US that picks up that narrative and runs with it, well, now, it's effectively fair game. It's part of the conversation, First Amendment protected.
Solomon: Let's move to the other side. What do you do about it without violating the privacy and free speech civil liberties of citizens?
Krebs: This is really the political question of the day. In fact, just last week there was a Supreme Court hearing on Murthy v. Missouri that gets to this question of government and platforms working together. (Editor’s note: The case hinges on whether the government’s efforts to combat misinformation online around elections and COVID constitute a form of censorship). Based on my read, the Supreme Court was largely being dismissive of Missouri and Louisiana's arguments in that case. But we'll see what happens.
I think the bigger issue is that there is this broader conflict, particularly with China, and it is a hot cyber war. Cyber war from their military doctrine has a technical leg and there's a psychological leg. And as we see it, there are a number of different approaches.
For example, India has outlawed and banned hundreds of Chinese origin apps, including WeChat and TikTok and a few others. The US has been much more discreet in combating Chinese technology. The recent actions by the US Congress and the House of Representatives are much more focused on getting the foreign control piece out of the conversation and requiring divestitures.
Solomon: Chris, what’s the biggest cyber threat to the elections?
Krebs: Based on my conversations with law enforcement and the national security community, the number one request that they're getting from election officials isn't on the cyber side. It isn't on the disinformation side. It's on physical threats to election workers. We're talking about doxing, we're talking about swatting, we're talking about people physically intimidating at the polls and at offices. And this is resulting in election officials resigning and quitting and not showing up.
How do we protect those real American heroes who are making sure that we get to follow through on our civic duty of voting and elections? If those election workers aren't there, it's going to be a lot harder for you and me to get out there and vote.
Solomon: What is your biggest concern about AI technology galloping ahead of regulations?
Krebs: Here in the United States, I'm not too worried about regulation getting in front of AI. When you look at the recent AI executive order out of the Biden administration, it's about transparency and even the threshold they set for compute power and operations is about four times higher than the most advanced publicly available generative AI. And even if you cross that threshold, the most you have to do is tell the government that you're building or training that model and show safety and red teaming results, which hardly seems onerous to me.
The Europeans are taking a different approach, more of a regulate first, ask questions later, which I think is going to limit some of their ability to truly be at the bleeding edge of AI.
But I'll tell you this: We are using AI and cybersecurity to a much greater effect and impact than the bad guys right now. The best they can do right now is use it for social engineering, for writing better phishing emails, for some research, and for functionality. We are not seeing credible reports of AI being used to write new innovative malware. But in the meantime, we are giving tools that are AI powered to the threat hunters that have really advanced capabilities to go find bad stuff, to improve configurations, and ultimately take the security operations piece and supercharge it.
Midjourney quiets down politics
Everything is political for GZERO, but AI image generator Midjourney would rather avoid the drama. The company has begun blocking the creation of images featuring President Joe Biden and former President Donald Trump in the run-up to the US presidential election in November.
“I don’t really care about political speech,” said Midjourney CEO David Holz in an event with users last week. “That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So we’re going to have to put our foot down on it a bit.”
Holz’s statement comes just weeks after the Center for Countering Digital Hate issued a report showing it was able to use popular AI image generators to create election disinformation in 41% of its attempts. Midjourney performed worst out of all the tools the group tested, with researchers able to generate these images 65% of the time.
Examples included images of Joe Biden sick in a hospital bed, Donald Trump in a jail cell, and a box of thrown-out ballots in a dumpster. GZERO tried to generate a simple image of Biden and Trump shaking hands and received an error message: “Sorry! Our AI moderator thinks this prompt is probably against our community standards.”
For Midjourney, it seems like they simply don’t want to be in the business of policing what political speech is acceptable and what isn’t – so they’re taking the easy way out and turning the nozzle off entirely. OpenAI’s tools have long been hesitant to wade into political waters, and Microsoft and Google have drawn stark criticism for sensitivity failures around historical accuracy and offensive imagery. Why would Midjourney take that risk?
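The crudest way to "turn the nozzle off" is to reject any prompt that names a candidate before it ever reaches the image model. The sketch below is emphatically not Midjourney's actual moderator (the error message above suggests an AI-based system); it only illustrates the blanket-block approach, with a made-up term list and function name.

```python
# Toy prompt filter: reject any prompt naming either major US candidate.
# Illustrative only; real moderation systems are far more sophisticated.
BLOCKED_TERMS = ("joe biden", "donald trump", "biden", "trump")

def prompt_allowed(prompt: str) -> bool:
    """Return True if the prompt may proceed, False if it is blocked."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(prompt_allowed("Biden and Trump shaking hands"))    # False (blocked)
print(prompt_allowed("two world leaders shaking hands"))  # True (allowed)
```

The obvious trade-off: a keyword block also swallows benign prompts, like the handshake image GZERO tried to generate.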
AI election safeguards aren’t great
The British nonprofit Center for Countering Digital Hate (CCDH) used Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator for testing in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools’ protections a whopping 41% of the time.
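The headline number is simple division: successful bypasses over total attempts, computed per tool and overall. A sketch of that tally, where Midjourney's 13-of-20 mirrors its reported 65% rate but the other figures are placeholders, not CCDH's raw data:

```python
# Bypass rate = prompts that produced a disallowed image / total prompts.
# Midjourney's 13/20 mirrors the reported 65%; the other outcomes below are
# placeholder values, not CCDH's actual trial data.
def bypass_rate(outcomes):
    """outcomes: list of booleans, True when a prompt evaded the safeguard."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

trials = {
    "Midjourney": [True] * 13 + [False] * 7,
    "Image Creator": [True] * 6 + [False] * 14,
}
for tool, outcomes in trials.items():
    print(f"{tool}: {bypass_rate(outcomes):.0%} of prompts evaded safeguards")
```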
Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places,” CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”