Hard Numbers: AI-generated bank runs, Europe wants to supercharge innovation, Do you trust AI?, Dell’s big deal, South Korea’s GPU hoard
51.6 billion: Europe will invest $51.6 billion in artificial intelligence, European Commission President Ursula von der Leyen said last week. That’ll add to the $157 billion already committed by Europe’s private sector under the AI Champions Initiative launched at the AI Action Summit in Paris. The goal is to “supercharge” innovation across the continent, she said.
32: Just 32% of Americans say they trust artificial intelligence, according to the annual Edelman Trust Barometer published by the public relations firm Edelman on Thursday. By contrast, 72% of people in China said they trust AI. Meanwhile, only 44% of Americans said they are comfortable with businesses using AI.
5 billion: Dell shares rose 4% on Friday after press reports indicated it was closing a $5 billion deal to sell AI servers to Elon Musk’s xAI. Dell stock has soared 39% over the past year on increased demand for AI.
10,000: South Korea said Monday it will buy 10,000 graphics processors for its national computing center. The country is one of the few not restricted from buying these chips from American companies. It’s unclear who South Korea will buy from, but Nvidia dominates the market, trailed at a distance by AMD and Intel.
OpenAI CEO Sam Altman, left, and SoftBank Group CEO Masayoshi Son attend a marketing event in Tokyo, Japan, on Feb. 3, 2025.
Hard Numbers: OpenAI monster funding round, Meta’s glasses sales, Teens fall for AI too, The Beatles win at the Grammys, Anthropic’s move to reduce jailbreaking
1 million: Meta said that it sold 1 million units of its AI-enabled Ray-Ban smart glasses in 2024. It’s the first time the company has revealed sales numbers for its glasses, which retail for between $299 and $379.
35: Even young people get tricked by AI. A new report from Common Sense Media, a nonprofit advocacy group, found that 35% of teenagers aged 13–18 self-report being deceived by fake content online, including AI-generated media.
8: The Beatles won their eighth competitive Grammy Award on Sunday for the AI-assisted song “Now and Then.” A production team used AI to turn an unreleased John Lennon demo from the late 1970s into a polished track.
95: Anthropic announced a new “constitutional classifiers” system that in a test was 95% effective at blocking users from eliciting harmful content from its Claude models — up from 14% without the classifiers. Similar to the “prompt shields” Microsoft introduced last year, this is the latest effort to reduce “jailbreaking,” where users coerce AI models into ignoring their own content rules.
Deputy Prime Minister and Minister of Digital Affairs Krzysztof Gawkowski speaks during a press conference.
Poland sounds the Russia cyber alarm
Georgia, a former Soviet republic that’s now independent, has faced political crisis and social unrest over claims that Russia is manipulating its politics. Romania was forced to void an election result and rerun the vote late last year on similar charges of Russian meddling.
The charge isn’t new. Ukraine’s Orange Revolution (2004-05) began in response to an election result that protesters asserted had been determined by Vladimir Putin. And the charges of Russian interference in the 2016 US presidential race made headlines, though there was no evidence the Russians were successful enough to determine the outcome.
Today, Europeans are particularly on edge, because new elections are coming in both Germany and the Czech Republic. Russia has suffered more than 700,000 casualties in Ukraine, according to US officials. Its ability to wage conventional war has sustained enormous damage. All the more reason, European officials fear, for Russia to use cyber strikes and sabotage attacks to pressure their governments to cut their backing for Ukraine.
Rebuilding post-election trust in the age of AI
In a GZERO Global Stage discussion at the 7th annual Paris Peace Forum, Teresa Hutson, Corporate Vice President at Microsoft, reflected on the anticipated impact of generative AI and deepfakes on global elections. Despite widespread concerns, she noted that deepfakes did not significantly alter electoral outcomes. Instead, Hutson highlighted a more subtle effect: the erosion of public trust in online information, a phenomenon she referred to as the "liar's dividend."
"What has happened as a result of deepfakes is... people are less confident in what they're seeing online. They're not sure. The information ecosystem is a bit polluted," Hutson explained. She emphasized the need for technological solutions like content credentials and content provenance to help restore trust by verifying the authenticity of digital content.
Hutson also raised concerns about deepfakes targeting women in public life with non-consensual imagery, potentially deterring them from leadership roles. Looking ahead, she stressed the importance of mitigating harmful uses of AI, protecting vulnerable groups, and establishing appropriate regulations to advance technology in trustworthy ways.
This conversation was presented by GZERO in partnership with Microsoft at the 7th annual Paris Peace Forum. The Global Stage series convenes heads of state, business leaders, and technology experts from around the world for critical debates about the geopolitical and technological trends shaping our world.
Follow GZERO coverage of the Paris Peace Forum here: https://www.gzeromedia.com/global-stage
How the UN is combating disinformation in the age of AI
Disinformation is running rampant in today’s world. The internet, social media, and AI — combined with declining trust in major institutions — have created an ecosystem ripe for exploitation by nefarious actors aiming to spread false and hateful narratives. Meanwhile, governments worldwide are struggling to get big tech companies to take substantive steps to combat disinformation. And at the global level, the UN’s priorities are also being hit hard by these trends.
“We can't bring about and generate stability in fragile environments if populations are turning against our peacekeepers as a result of lies being spread against them online. We can't make progress on climate change if people are being led to believe first of all, that maybe it doesn't even exist, or that it's not as bad as they thought, or that it's actually too late and there's nothing that they can do about it,” Melissa Fleming, the UN's Under-Secretary-General for Global Communications, told GZERO in a conversation at the SDG Media Zone during the 79th UN General Assembly.
“The UN alone cannot tackle these problems without civil society, without people. And the people are what drives political agendas. So it's really important for us to work on our information ecosystems together,” Fleming added.
Though Fleming said that many in the UN are excited by AI's myriad potential benefits, she also emphasized the serious problems it’s already posing in terms of accelerating the spread of disinformation—particularly via deepfakes.
“We've spent a lot of time also trying to educate the public on how to spot misinformation and disinformation and how to tell if a photo is real or if it is fake. In the AI information age, that's going to become nearly impossible,” Fleming said.
“So we're calling on AI actors to really create safety by design, and don't leave it only to the users to be able to try to figure out how to navigate this. They are designing these instruments, and they can be part of the solution,” she added.
The word “Hacked” displayed on a mobile phone, with binary code and an Anonymous mask in the background, in Brussels, Belgium, on Aug. 9, 2023.
Old MacDonald had a Russian bot farm
On July 9, the US Department of Justice announced it disrupted a Russian bot farm that was actively using generative AI to spread disinformation worldwide. The department seized two domain names and probed 1,000 social media accounts on X (formerly known as Twitter) in collaboration with the FBI as well as Canadian and Dutch authorities. X voluntarily suspended the accounts, the government said.
The Kremlin-approved effort, which has been active since at least 2022, was spearheaded by an unnamed editor at RT, the Russian state-run media outlet, who created fake social media personas and posted pro-Putin and anti-Ukraine sentiments on X. It’s unclear which AI tools were used to generate the social media posts.
“Today’s actions represent a first in disrupting a Russian-sponsored Generative AI-enhanced social media bot farm,” FBI Director Christopher Wray wrote in a statement. Wray said that Russia intended to use this bot farm to undermine allies of Ukraine and “influence geopolitical narratives favorable to the Russian government.”
Russia has long tried to sow chaos online in the United States, but the Justice Department’s latest action signals that it’s ready to intercept inorganic social media activity — especially when it’s supercharged with AI.
The Disinformation Election: Will the wildfire of conspiracy theories impact the vote?
Trust in institutions is at an all-time low, and only 44% of Americans have confidence in the honesty of elections. Distrust and election-related disinformation are leaving society vulnerable to conspiracy theories.
Ian Bremmer, president of Eurasia Group and GZERO Media, notes that American democracy is in crisis largely because “one thing not in short supply this election season: conspiracy theories.”
As part of GZERO Media’s election coverage, we are tracking the impact of disinformation and conspiracy theories on democracy. To get a sense of how this election may be pulled down a dark and dangerous rabbit hole, click here for our interactive guide to conspiracy theories.
Are bots trying to undermine Donald Trump?
In an exclusive investigation into online disinformation surrounding the reaction to Donald Trump’s hush-money trial, GZERO asks whether bots are being employed to shape debates about the former president’s guilt or innocence. With the help of Cyabra, a firm that specializes in tracking bots, we examined the online reactions to Trump’s trial for signs of disinformation. Is Trump’s trial the target of a massive online propaganda campaign – and, if so, which side is to blame?
_____________
Adult film actress Stormy Daniels testified on Tuesday against former President Donald Trump, detailing her sexual encounter with Trump in 2006 and her $130,000 hush money payment from Trump's ex-attorney Michael Cohen before the 2016 election. In the process, she shared explicit details and said she had not wanted to have sex with Trump. This led the defense team to call for a mistrial. Their claim? That the embarrassing aspects were “extraordinarily prejudicial.”
Judge Juan Merchan denied the motion – but also agreed that some of the details from Daniels were “better left unsaid.”
The trouble is, plenty is being said, inside the courtroom and in the court of public opinion – aka social media. With so many people learning about the most important trials of the century online, GZERO partnered with Cyabra to investigate how bots are influencing the dialogue surrounding the Trump trials. For a man once accused of winning the White House on the back of Russian meddling, the results may surprise you.
Bots – surprise, surprise – are indeed rampant amid the posts about Trump’s trials online. Cyabra’s AI algorithm analyzed 7,500 posts with hashtags and phrases related to the trials and found that 17% of Trump-related tweets came from fake accounts. The team estimated that these inauthentic tweets reached a whopping 49.1 million people across social media platforms.
Ever gotten into an argument on X? Your opponent might not have been real. Cyabra found that the bots frequently comment and interact with real accounts.
The bots also frequently comment on tweets from Trump's allies in large numbers, leading X’s algorithm to amplify those tweets. Cyabra's analysis revealed that, on average, bots are behind 15% of online conversations about Trump. However, in certain instances, particularly concerning specific posts, bot activity surged to over 32%.
But what narrative do they want to spread? Well, it depends on who’s behind the bot. If you lean left, you might assume most of the bots were orchestrated by MAGA hat owners – if you lean right, you’ll be happy to learn that’s not the case.
Rather than a bot army fighting in defense of Trump, Cyabra found that 73% of the posts were negative about the former president, offering quotes like “I don’t think Trump knows how to tell the truth” and “not true to his wife, not true to the church, not true to the country, just a despicable traitor.”
Meanwhile, only 4% were positive. On the positive posts, Cyabra saw a pattern of bots framing the legal proceedings as biased and painting Trump as a political martyr. The tweets often came in the form of comments on Trump’s allies’ posts in support of the former president. For example, in a tweet from Marjorie Taylor Greene calling the trials “outrageous” and “election interference,” 32% of the comments were made by inauthentic profiles.
Many of the tweets and profiles analyzed were also indistinguishable from posts made by real people – a problem many experts fear is only going to worsen. As machine learning and artificial intelligence advance, so too will the fake accounts and attempts to shape political narratives.
Moreover, while most of the bots came from the United States, they were by no means all American. The locations of some of the bots do not exactly read like a list of usual suspects, with only three in China and zero in Russia (see map below).
Map of bot locations: Cyabra
This is just one set of data based on one trial, so there are limitations to drawing broader conclusions. But we do know, of course, that conservatives have long been accused of jumping on the bot-propaganda train to boost their political fortunes. In fact, Cyabra noted last year that pro-Trump bots were even trying to sow division amongst Republicans and hurt Trump opponents like Nikki Haley.
Still, Cyabra’s research, both then and now, shows that supporters of both the left and the right are involved in the bot game – and that, in this case, much of the bot-generated content was negative about Trump, which contradicts assumptions that his supporters largely operate bots. It’s also a stark reminder to ensure you’re dealing with humans in your next online debate.
In the meantime, check out Cyabra’s findings in full by clicking the button below.