Hard Numbers: Seeking employees with AI skills, Cursor’s cash, Kremlin lies infiltrate chatbots, Singapore’s aging population, VC’s spending spree, OpenAI ❤️s CoreWeave
25: Nearly 25% of all US technology jobs posted so far this year have sought employees with artificial intelligence-related skills. The share was even higher for IT jobs: 36% of those posted in January requested AI proficiency. It seems white-collar employees will need some fluency with AI tools in the years ahead.
10 billion: Anysphere, the startup behind the AI coding assistant Cursor, is in talks to raise money at a $10 billion valuation, according to a report in Bloomberg on Friday. Talk of a new funding round comes only three months after it last raised capital in December 2024 — $100 million at a $2.5 billion valuation. AI assistants like Cursor, Devin, and Windsurf have become popular ways for software developers to code in recent months.
3.6 million: The misinformation monitoring company NewsGuard found 3.6 million articles from 2024 featuring Russian propaganda in the training sets of 10 leading AI chatbots, including OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok, plus Perplexity’s AI search engine. A Kremlin network called Pravda, it says, has deliberately tried to flood search results and web crawlers with “pro-Kremlin falsehoods,” many of which are about Ukraine.
25: By 2030, 25% of Singaporeans will be 65 or older. That’s up from just 10% in 2010, a massive demographic shift for the small but bustling Asian country. Singapore’s government and companies are deploying AI solutions, including in-home fall-detection systems and patient-monitoring systems in hospitals. Without some technological assistance, the country will need to hire an additional 6,000 nurses and elder care workers annually to meet its aging population’s needs.
30 billion: Investment firms have poured $30 billion into US startups this quarter, according to Pitchbook data reported by the Financial Times on Sunday. Another $50 billion is reportedly on the way, with Anduril, OpenAI, and other companies raising money. This could be the biggest year for US venture capital since 2021, when firms invested $358 billion.
12 billion: OpenAI has signed a five-year $12 billion contract with the cloud computing startup CoreWeave. The ChatGPT maker will take a $350 million equity stake in CoreWeave, which specializes in cloud-based processors for AI companies, ahead of its planned IPO, which is expected in the coming weeks.
Hard Numbers: AI-generated bank runs, Europe wants to supercharge innovation, Do you trust AI?, Dell’s big deal, South Korea’s GPU hoard
51.6 billion: Europe will invest $51.6 billion in artificial intelligence, European Commission President Ursula von der Leyen said last week. That’ll add to the $157 billion already committed by Europe’s private sector under the AI Champions Initiative launched at the AI Action Summit in Paris. The goal is to “supercharge” innovation across the continent, she said.
32: Just 32% of Americans say they trust artificial intelligence, according to the annual Edelman Trust Barometer published by the public relations firm Edelman on Thursday. By contrast, 72% of people in China said they trust AI. Meanwhile, only 44% of Americans said they are comfortable with businesses using AI.
5 billion: Dell shares rose 4% on Friday after press reports indicated it was closing a $5 billion deal to sell AI servers to Elon Musk’s xAI. Dell stock has soared 39% over the past year on increased demand for AI.
10,000: South Korea said Monday it will buy 10,000 graphics processors for its national computing center. The country is one of the few not restricted from buying these chips from American companies. It’s unclear whom South Korea will buy from, but Nvidia dominates the market, with AMD and Intel far behind.
Deputy Prime Minister and Minister of Digital Affairs Krzysztof Gawkowski speaks during a press conference.
Poland sounds the Russia cyber alarm
Georgia, a former Soviet republic, has faced political crisis and social unrest over claims that Russia is manipulating its politics. Romania was forced to void an election result and rerun the vote late last year on similar charges of Russian meddling.
The charge isn’t new. Ukraine’s Orange Revolution (2004-05) began in response to an election result that protesters asserted had been determined by Vladimir Putin. And the charges of Russian interference in the 2016 US presidential race made headlines, though there was no evidence the Russians were successful enough to determine the outcome.
Today, Europeans are particularly on edge because new elections are coming in both Germany and the Czech Republic. Russia has suffered more than 700,000 casualties in Ukraine, according to US officials. Its ability to wage conventional war has sustained enormous damage. All the more reason, European officials fear, for Russia to use cyber strikes and sabotage attacks to pressure their governments to cut their backing for Ukraine.
How the UN is combating disinformation in the age of AI
Disinformation is running rampant in today’s world. The internet, social media, and AI — combined with declining trust in major institutions — have created an ecosystem ripe for exploitation by nefarious actors aiming to spread false and hateful narratives. Meanwhile, governments worldwide are struggling to get big tech companies to take substantive steps to combat disinformation. And at the global level, the UN’s priorities are also being hit hard by these trends.
“We can't bring about and generate stability in fragile environments if populations are turning against our peacekeepers as a result of lies being spread against them online. We can't make progress on climate change if people are being led to believe first of all, that maybe it doesn't even exist, or that it's not as bad as they thought, or that it's actually too late and there's nothing that they can do about it,” Melissa Fleming, the UN's Under-Secretary-General for Global Communications, told GZERO in a conversation at the SDG Media Zone during the 79th UN General Assembly.
“The UN alone cannot tackle these problems without civil society, without people. And the people are what drives political agendas. So it's really important for us to work on our information ecosystems together,” Fleming added.
Though Fleming said that many in the UN are excited by AI's myriad potential benefits, she also emphasized the serious problems it’s already posing by accelerating the spread of disinformation, particularly via deepfakes.
“We've spent a lot of time also trying to educate the public on how to spot misinformation and disinformation and how to tell if a photo is real or if it is fake. In the AI information age, that's going to become nearly impossible,” Fleming said.
“So we're calling on AI actors to really create safety by design, and don't leave it only to the users to be able to try to figure out how to navigate this. They are designing these instruments, and they can be part of the solution,” she added.
The word “Hacked” displayed on a mobile phone, with binary code and an Anonymous mask in the background, on 9 August 2023 in Brussels, Belgium.
Old MacDonald had a Russian bot farm
On July 9, the US Department of Justice announced it disrupted a Russian bot farm that was actively using generative AI to spread disinformation worldwide. The department seized two domain names and probed 1,000 social media accounts on X (formerly known as Twitter) in collaboration with the FBI as well as Canadian and Dutch authorities. X voluntarily suspended the accounts, the government said.
The Kremlin-approved effort, which has been active since at least 2022, was spearheaded by an unnamed editor at RT, the Russian state-run media outlet, who created fake social media personas and posted pro-Putin and anti-Ukraine sentiments on X. It’s unclear which AI tools were used to generate the social media posts.
“Today’s actions represent a first in disrupting a Russian-sponsored Generative AI-enhanced social media bot farm,” FBI Director Christopher Wray wrote in a statement. Wray said that Russia intended to use this bot farm to undermine allies of Ukraine and “influence geopolitical narratives favorable to the Russian government.”
Russia has long tried to sow chaos online in the United States, but the Justice Department’s latest action signals that it’s ready to intercept inorganic social media activity — especially when it’s supercharged with AI.
Cyabra data on the Trump trial
Battle of the bots: Trump trial
Talk about courting attention. Former President Donald Trump’s guilty verdict in his hush money trial on 34 felony counts captured the public’s imagination – some to rejoice, others to reject – and much of the debate played out on X, formerly known as Twitter.
But, dearest gentle reader, we humans were not alone. Internet bots also immediately got to work to manipulate the online conversation. As a part of our ongoing investigation into how disinformation is affecting the 2024 election and US democracy, we partnered with Cyabra, a disinformation detection firm, to investigate how fake profiles online responded to the Trump trial.
After analyzing 22,000 pieces of trial-related content, Cyabra found that 17% came from fake accounts. Real people still made up the majority of posts, but 55% of the inauthentic posts aimed to discredit the US justice system and portray Trump as the victim of a biased system.
Regardless of how one feels about the verdict, posts like these further endanger voters’ faith in institutions at a time when trust in them is already at an all-time low. Plummeting trust in institutions is also fueling conspiracy theories. To learn about the theories with the biggest influence on the 2024 election, check out GZERO’s new immersive project here.
Graph of real and fake account activity on AOC's X account.
Battle of the bots: AOC under attack
GZERO teamed up with Cyabra, a disinformation detection firm, to investigate how fake actors on the internet could be shaping interactions with Rep. Alexandria Ocasio-Cortez’s posts.
They found that 27% of responses to her X posts condemning US involvement in Israel’s Gaza operations and Columbia University’s use of police against protesters were from fake accounts.
The most common words used by the fake accounts were “Hamas” and “terrorist,” and their comments typically accused the congresswoman of sympathizing with terrorists or inciting violence. Many also compared the student protests to the Jan. 6 riots, suggesting a double standard based on the protesters’ political agenda.
A man views a computer screen displaying the AI-crafted speech of former Prime Minister Imran Khan, to call for votes ahead of the general elections in Karachi, Pakistan, in early February 2024.
AI election safeguards aren’t great
The British nonprofit Center for Countering Digital Hate (CCDH) tested Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools’ protections a whopping 41% of the time.
Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places," CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”