Battle of the bots: Trump trial
Talk about courting attention. Former President Donald Trump’s guilty verdict on 34 felony counts in his hush money trial captured the public’s imagination – some rejoicing, others rejecting it – and much of the debate played out on X, formerly known as Twitter.
But, dearest gentle reader, we humans were not alone. Internet bots also immediately got to work manipulating the online conversation. As part of our ongoing investigation into how disinformation is affecting the 2024 election and US democracy, we partnered with Cyabra, a disinformation detection firm, to investigate how fake profiles online responded to the Trump trial.
After analyzing 22,000 pieces of trial-related content, Cyabra found that 17% came from fake accounts. While real people made up the majority of posts, 55% of the inauthentic posts were aimed at discrediting the US justice system and portraying Trump as a victim of a biased system.
Regardless of how one feels about Trump’s guilt, posts like these further erode voters’ faith in institutions at a time when trust is already at an all-time low. That plummeting trust is also fueling conspiracy theories. To learn about the theories with the biggest influence on the 2024 election, check out GZERO’s new immersive project here.
Battle of the bots: AOC under attack
GZERO teamed up with Cyabra, a disinformation detection firm, to investigate how fake actors on the internet could be shaping interactions with Rep. Alexandria Ocasio-Cortez’s posts.
They found that 27% of responses to her X posts condemning US involvement in Israel’s Gaza operations and Columbia University’s use of police against protesters were from fake accounts.
The most common words used by the fake accounts were “Hamas” and “terrorist,” and their comments typically accused the congresswoman of sympathizing with terrorists or inciting violence. Many also compared the student protests to the Jan. 6 riots, suggesting a double standard based on the protesters’ political agenda.
Battle of the bots: Violence at the DNC
When students protesting the war in Gaza took over a building on Columbia’s campus 56 years to the day after it was occupied by students protesting the Vietnam War, many began drawing parallels between the two waves of student protests.
Back in 1968, student demonstrators went home for the summer, only to resurface by the thousands at the Democratic National Convention in Chicago, and, long story short, things got ugly. The gathering erupted into violence, leading to the activation of the National Guard and the arrests of hundreds of protesters.
This August, the DNC is also in Chicago, which has some wondering: Could history repeat itself? GZERO posed the question to Cyabra, an Israel-based data firm that investigates fake actors on the internet. They analyzed the response to the DNC’s X post announcing the convention to see if fake accounts were inciting protests or calling for chaos.
They found that 11% of the accounts that commented on the post were fake. While that means the majority of accounts were authentic, 72% of the fake accounts were calling for violence at the DNC, with a large number rallying people to “go on the streets.”
Last weekend, student protesters retook Columbia’s lawns, setting up another Gaza Solidarity Encampment during alumni weekend – proof that the student movement is not taking a summer vacation. So calls for chaos at the convention will likely continue to grow from both real and fake actors online as the event nears.
Bots battle Bibi
X has become a critical means for politicians and the public to broadcast their views on current events, often triggering controversy, trolling, and bitter battles in the new political arena: the comments section. Trouble is, it’s not just people posting. Social media bots — programs that automate interactions and post content on social media in ways that mimic human behavior — are also flooding the comments section, which means you may be responding to fake accounts, not humans.
We partnered with Cyabra, an Israel-based data firm that investigates fake actors on the internet, and found that bots flocked in droves to a post from Israeli Prime Minister Benjamin “Bibi” Netanyahu. They made up over 43% of all replies, and of the bots spreading negative sentiment about the post, 19% used keywords like “genocide,” “kids,” and “children.”
The investigation also found that while real accounts outnumbered fake ones, the bots were far more active, with many commenting multiple times. Across the board, they found that up to 31% of comments responding to posts from key political figures are fueled by fake accounts.
Are bots trying to undermine Donald Trump?
In an exclusive investigation into online disinformation surrounding the reaction to Donald Trump’s hush-money trial, GZERO asks whether bots are being employed to shape debates about the former president’s guilt or innocence. With the help of Cyabra, a firm that specializes in tracking bots, we looked for disinformation in the online reactions to the trial. Is Trump’s trial the target of a massive online propaganda campaign – and, if so, which side is to blame?
_____________
Adult film actress Stormy Daniels testified on Tuesday against former President Donald Trump, detailing her sexual encounter with Trump in 2006 and her $130,000 hush money payment from Trump's ex-attorney Michael Cohen before the 2016 election. In the process, she shared explicit details and said she had not wanted to have sex with Trump. This led the defense team to call for a mistrial. Their claim? That the embarrassing aspects were “extraordinarily prejudicial.”
Judge Juan Merchan denied the motion – but also agreed that some of the details from Daniels were “better left unsaid.”
The trouble is, plenty is being said, inside the courtroom and in the court of public opinion – aka social media. With so many people following the most important trials of the century online, GZERO partnered with Cyabra to investigate how bots are influencing the dialogue surrounding the Trump trials. For a man once accused of winning the White House on the back of Russian meddling, the results may surprise you.
Bots – surprise, surprise – are indeed rampant amid the posts about Trump’s trials online. Cyabra’s AI algorithm analyzed 7,500 posts with hashtags and phrases related to the trials and found that 17% of Trump-related tweets came from fake accounts. The team estimated that these inauthentic tweets reached a whopping 49.1 million people across social media platforms.
Ever gotten into an argument on X? Your opponent might not have been real. Cyabra found that the bots frequently comment and interact with real accounts.
The bots also comment in large numbers on tweets from Trump’s allies, leading X’s algorithm to amplify those tweets. Cyabra’s analysis revealed that, on average, bots are behind 15% of online conversations about Trump – but on certain posts, bot activity surged to over 32%.
But what narrative do they want to spread? Well, it depends on who’s behind the bot. If you lean left, you might assume most of the bots were orchestrated by MAGA hat owners – if you lean right, you’ll be happy to learn that’s not the case.
Rather than a bot army fighting in defense of Trump, Cyabra found that 73% of the posts were negative about the former president, offering quotes like “I don’t think Trump knows how to tell the truth” and “not true to his wife, not true to the church, not true to the country, just a despicable traitor.”
Meanwhile, only 4% were positive. Among the positive posts, Cyabra saw a pattern of bots framing the legal proceedings as biased and painting Trump as a political martyr. These tweets often came as supportive comments on posts from Trump’s allies. For example, on a tweet from Marjorie Taylor Greene calling the trials “outrageous” and “election interference,” 32% of the comments were made by inauthentic profiles.
Many of the tweets and profiles analyzed were also indistinguishable from posts made by real people – a problem many experts fear is only going to worsen. As machine learning and artificial intelligence advance, so too will the fake accounts and attempts to shape political narratives.
Moreover, while most of the bots came from the United States, they were by no means all. The locations of the rest do not exactly read like a list of the usual suspects, with only three in China and zero in Russia (see map below).
Map courtesy of Cyabra
This is just one set of data based on one trial, so there are limits to drawing broader conclusions. But we do know, of course, that conservatives have long been accused of jumping on the bot-propaganda train to boost their political fortunes. In fact, Cyabra noted last year that pro-Trump bots were even trying to sow division among Republicans and hurt Trump opponents like Nikki Haley.
Still, Cyabra’s research, both then and now, shows that supporters of both the left and the right are involved in the bot game – and that, in this case, much of the bot-generated content was negative about Trump, which contradicts assumptions that his supporters largely operate bots. It’s also a stark reminder to ensure you’re dealing with humans in your next online debate.
In the meantime, check out Cyabra’s findings in full by clicking the button below.
Tracking anti-Navalny bot armies
In an exclusive investigation into the disinformation surrounding online reaction to Alexei Navalny’s death, GZERO asks whether it is possible to track the birth of a bot army. Was Navalny’s tragic death accompanied by a massive online propaganda campaign? We investigated, with the help of a company called Cyabra.
Alexei Navalny knew he was a dead man the moment he returned to Moscow in January 2021. Vladimir Putin had already tried to kill him with the nerve agent Novichok, after which Navalny was sent to Germany for treatment. Poisoning is one of Putin’s signatures, like pushing opponents out of windows or shooting them in the street. Navalny knew Putin would try again.
Still, he came home.
“If your beliefs are worth something,” Navalny wrote on Facebook, “you must be willing to stand up for them. And if necessary, make some sacrifices.”
He made the ultimate sacrifice on Feb. 16, when Russian authorities announced, with Arctic banality, that he had “died” at the IK-3 penal colony more than 1,200 miles north of Moscow. A frozen gulag. “Convict Navalny A.A. felt unwell after a walk, almost immediately losing consciousness,” they announced, as if quoting a passage from Koestler’s “Darkness at Noon.” Later, deploying the pitch-black doublespeak of all dictators, they decided to call it “sudden death syndrome.”
Worth noting: Navalny was filmed the day before, looking well. There is no body for his wife and two kids to see. No autopsy.
As we wrote this morning, Putin is winning on all fronts. Sensing NATO support for the war in Ukraine is wavering – over to you, US Congress – Putin is acting with confident impunity. His army is gaining ground in Ukraine. He scored a propaganda coup when he toyed with dictator-fanboy Tucker Carlson during his two-hour PR session thinly camouflaged as an “interview.” And just days after Navalny was declared dead, the Russian pilot Maksim Kuzminov, who defected to Ukraine with his helicopter last August, was gunned down in Spain.
And then, of course, there is the disinformation war, another Putin battleground. Navalny’s death got me wondering whether there would be an orchestrated disinformation campaign around the event, and if so, whether there was any way to track it. Would there be, say, an online release of shock bot troops to combat Western condemnation of Navalny’s death and blunt the blowback?
It turns out there was.
To investigate, GZERO asked the “social threat information company” Cyabra, which specializes in tracking bots, to look for disinformation surrounding the online reactions to the news about Navalny. The Israeli company says its job is to uncover “threats” on social platforms. It has built AI-driven software to track “attacks such as impersonation, data leakage, and online executive perils as they occur.”
Cyabra’s team focused on the tweets President Joe Biden and Prime Minister Justin Trudeau posted condemning Navalny’s death. Their software analyzed the number of bots that targeted these official accounts. And what they found was fascinating.
According to Cyabra, “29% of the Twitter profiles interacting with Biden’s post about Navalny on X were identified as inauthentic.” For Trudeau, the number was 25%.
Courtesy of Cyabra
So, according to Cyabra, more than a quarter of the reaction you saw on X related to Navalny’s death and these two leaders’ reactions came from bots, not humans. In other words, a bullshit campaign of misinformation.
This finding raises a lot of questions. What’s the baseline of bot activity needed for a good sense of comparison? For example, is 29% bot traffic on Biden’s tweet about Navalny’s death a lot, or is everything on social media flooded with the same amount of crap? How does Cyabra’s team actually track bots, and how accurate is their data? Are they missing bots that are well-disguised, or, on the other side, are some humans being labeled as “inauthentic”? In short, what does this really tell us?
In the year of elections, with multiple wars festering and AI galloping ahead of regulation, the battle against disinformation and bots is more consequential than ever. The bot armies of the night are marching. We need to find a torch to see where they are and if there are any tools that can help us separate fact from fiction.
Tracking bot armies is a job that often happens in the shadows, and it comes with a lot of challenges. Can this be done without violating people’s privacy? How hard is this to combat? I spoke with the CEO of Cyabra, Dan Brahmy, to get his view.
Solomon: When Cyabra tracked the reactions to the tweets from President Joe Biden and Prime Minister Trudeau about the “death” of Navalny, you found more than 25% of the accounts were inauthentic. What does this tell us about social media and what people can actually trust is real?
Brahmy: From elections to sporting events to other significant international headline events, social media is often the destination for millions of people to follow the news and share their opinion. Consequently, it is also the venue of choice for malicious actors to manipulate the narrative.
This was also the case when Cyabra looked into President Biden’s and Prime Minister Trudeau’s X posts directly blaming Putin for Navalny’s death. These posts turned out to be the ideal playing ground for narrative-manipulating bots. Inauthentic accounts attacked Biden and Trudeau at scale, blaming them for their foreign and domestic policies while attempting to divert attention from Putin and the negative narrative surrounding him.
The high number of fake accounts detected by Cyabra, together with the speed at which those accounts engaged in the conversation to divert and distract following the announcement of Navalny’s death, shows the capabilities of malicious actors and their intentions to conduct sophisticated influence operations.
Solomon: Can you tell where these are from and who is doing it?
Brahmy: Cyabra monitors publicly available information on social media and does not track IP addresses or any private information. Only the location an account publicly claims is collected. When analyzing the Navalny conversation, Cyabra saw that the majority of the accounts claimed to be located in the US.
Solomon: There is always the benchmark question: How much “bot” traffic or inauthentic traffic do you expect at any time, for any online event? Put the numbers we see here for Trudeau and Biden in perspective.
Brahmy: The average percentage of fake accounts participating in an everyday conversation online typically varies between 4% and 8%. Cyabra’s discovery of 25-29% fake accounts in this conversation is alarming and should give us cause for concern.
Solomon: OK, then there is the accuracy question. How do you actually identify a bot, and how do you know, given the sophistication of AI and new bots, that you are not missing a lot of them? Is it easier to find “obvious bots” – i.e., something that tweets every two minutes, 24 hours a day – than, say, a series of bots that look and act very human?
Brahmy: Using advanced AI and machine learning, Cyabra analyzes a profile’s activity and interactions to determine if it demonstrates non-human behaviors. Cyabra’s proprietary algorithm consists of over 500 behavioral parameters. Some parameters are more intuitive, like the use of multiple languages, while others require in-depth expertise and advanced machine learning. Cyabra’s technology works at scale and in almost real-time.
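Cyabra’s 500-parameter model is proprietary, but the kinds of behavioral signals Brahmy describes – superhuman posting cadence, round-the-clock activity, multilingual output – can be illustrated with a toy rule-based scorer. The Python sketch below is purely hypothetical: the profile fields, thresholds, and weights are invented for illustration and are not Cyabra’s method.

```python
from dataclasses import dataclass

# Hypothetical profile features; a real system uses hundreds of parameters.
@dataclass
class Profile:
    posts_per_hour: float      # average posting rate
    active_hours_per_day: int  # hours of the day with at least one post
    languages_used: int        # distinct languages the account posts in
    reply_ratio: float         # share of posts that are replies to others

def bot_score(p: Profile) -> float:
    """Combine a few behavioral signals into a 0-1 'inauthenticity' score.
    Thresholds and weights are invented for illustration."""
    score = 0.0
    if p.posts_per_hour > 20:          # a post every ~3 minutes is superhuman
        score += 0.35
    if p.active_hours_per_day >= 22:   # the account never sleeps
        score += 0.25
    if p.languages_used >= 4:          # posts fluently in many languages
        score += 0.20
    if p.reply_ratio > 0.9:            # almost exclusively piles onto others' posts
        score += 0.20
    return min(score, 1.0)

# The "obvious bot" from the question above: tweeting every two minutes, around the clock.
obvious = Profile(posts_per_hour=30, active_hours_per_day=24,
                  languages_used=5, reply_ratio=0.95)
print(bot_score(obvious))  # 1.0 -> flag for review
```

A production classifier replaces hand-set thresholds like these with machine-learned weights over many more signals, which is what lets it catch the well-disguised accounts that simple rules miss.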
Solomon: There is so much disinformation anyway – actual people who lie, mislead, falsify, scam – how much does this matter?
Brahmy: The creation and activities of fake accounts on social media (whether it be a bot, sock puppet, troll, or otherwise) should be treated with the utmost seriousness. Fake accounts are almost exclusively created for nefarious purposes. By identifying inauthentic profiles and then analyzing their behaviors and the false narratives they are spreading, we can understand the intentions of malicious actors and remedy them as a society.
While we all understand that the challenge of disinformation is pervasive and a threat to society, being able to conduct the equivalent of an online CT scan reveals the areas that most urgently need our attention.
Solomon: Why does it matter in a big election year?
Brahmy: More than 4 billion people globally are eligible to vote in 2024, with over 50 countries holding elections. That’s 40% of the world’s population. Particularly during an election year, tracking disinformation is important – from protecting the democratic process, ensuring informed decision-making, preventing foreign interference, and promoting transparency, to protecting national security. By tracking and educating the public on the prevalence of inauthentic accounts, we slowly move closer to creating a digital environment that fosters informed, constructive, and authentic discourse.
You can check out part of the Cyabra report here.