In Ukraine’s AI-enabled war against Russia, humans still call the shots
Kateryna Bondar wants you to know that “killer robots” aren’t deployed on the battlefield in Ukraine — at least not yet. Bondar, a fellow with the Wadhwani AI Center at the Center for Strategic and International Studies, is a former advisor to the government of Ukraine, where she worked on defense and innovation, among other things.
In a new report for CSIS, Bondar seeks to dispel myths about the AI-enabled battlefield in Ukraine’s war against Russia, separating on-the-ground realities from visions of science fiction. Bondar spoke to GZERO’s Scott Nover about Ukraine’s capabilities, its reliance on foreign powers, ethical considerations about autonomous systems, and what’s next for AI in warfare.
This interview has been edited for clarity and length.
Scott Nover: Why did you want to write this report?
Kateryna Bondar: I worked in the Ukrainian Ministry of Defense, and have a lot of connections with the Ukrainian military, so I’m pretty familiar with what is happening there. And when I discuss technology and what is happening on the front lines with people there, they’re like, “Come on, we’re so far from real killer robots and AI wars. And when we read all those articles, it’s good that people think that Ukraine is so advanced, but the reality is not like that.” So the actual goal of this report is to objectively evaluate the state of AI in the war. Full autonomy is really far from actual deployment on the battlefield and the front lines. There is autonomy there, but it’s very partial with separate functions like autonomous navigation and automatic target recognition. These pieces of autonomy exist and they’re deployed on the front lines, but they are not fully autonomous systems.
How does Ukraine stack up against Russia technologically?
Ukraine is still more advanced. Before the war, a lot of Ukrainian engineers were doing outsourced work for US companies. The Ukrainian talent pool is bigger. The Ukrainian patriotic movement was way bigger than in Russia, which motivated a lot of software engineers to join the army. When I talk to people who track Russian technology development, we both agree that Ukraine is still leading. Of course, it’s a constant race, but, for now, Ukraine is leading in terms of software development. What is also important is that Ukraine — I hope and I think — finally realized this competitive advantage, and it is really pushing on software development and deployment through procurement of AI-enabled drones. When I talk to the Ukrainian military, and specifically the Unmanned Systems Forces, a separate branch that they created, they say that they currently conduct about 80% of their strikes with drones, which I think is an impressive number. Drones can replace conventional weapon systems — but not completely. Of course, they need artillery, but it shows that it’s possible, and I think what’s happening on the front line in Ukraine is really innovative and impressive.
You mentioned the West. With tensions bubbling up between Ukraine and the Trump administration, I’m wondering: How self-sufficient is Ukraine?
Ukraine is capable of producing its own drones right now — the supply chain is established. It’s a bit more expensive than buying components from abroad, especially from China, but Ukraine had to deal with this even before the situation with the United States. They were mostly using Chinese components, and China put export controls on selling components to Ukrainians. That was the main reason Ukraine started creating its own supply chain to be able to build its own drones. The only components that Ukraine cannot produce by itself right now are chips and electronics. China is the best source for those because no one can compete with Chinese prices, unfortunately. So Silicon Valley and US producers are not very competitive here. But Ukrainians are getting Chinese components and US components — basically anything that they can get and that is cost-efficient. So yeah, Ukraine is moving toward being self-sufficient, but in terms of chips and electronics, they still rely on external components.
When we talk about AI and chips, we’re usually talking about expensive Nvidia GPUs, but for drone warfare you need small, cheap chips, right?
Yes, you don’t need a super sophisticated huge model installed on that small little chip and that small little drone — especially if it’s a kamikaze drone or a bomber. Most of the time it’s a one-way ticket. There’s no point in installing something really sophisticated, cool, and expensive on something that you use once. So, smaller models, simpler models, smaller chips, cheaper chips — that's how you create a kind of balance between efficiency and cost.
We’re far away from killer robots, but what are the current ethical questions that Ukraine is — or should be — grappling with in regard to AI on the battlefield?
I'll be very honest and open with you. Ukraine doesn’t put ethical questions as the first priority, and for this exact reason, they don’t have any regulation limiting defense and military applications of AI because what they currently need is something very efficient that can kill Russians. That’s almost an official position. On the other hand, when we’re talking about the technology, its development, and how much you can rely on it, this is where Ukraine still sees a problem. All branches of the military that I've been talking with — deep strikes, tactical level, everyone — are saying we don’t trust technology yet, so humans have to be in the loop.
They can even combine different functions: you can install a chip that has a model for target recognition and then another chip with a model that enables the drone to fly autonomously. So basically, this drone can be autonomous. It can find a target, identify it, and make the decision to strike and engage it. But they don’t allow this to be fully autonomous because the number of mistakes and false positives is still way too high to trust the technology, so the common vision is that the human has to be able to intervene and stop whatever it is from striking or executing the mission.
So, there is a common vision — without any formal strategy or document on an official level, no legislation or regulation. There is only a white paper, released by the Ministry of Digital Transformation — it doesn’t have any legal force; it’s just them sharing their vision. It says we aim not to limit military AI and we want to comply with international law and regulations, which is itself contradictory: yeah, we want to be compliant with all international legislation and laws, which don’t exist yet, and in the meantime, inside the country, we won’t stop anyone from developing autonomous weapons.
“Human-in-the-loop” is often an ethical term, meaning that systems shouldn’t be making decisions of war autonomously. But you’re saying that it’s also a strategic necessity for Ukraine right now.
Yeah, and more like a safety measure because there are cases when the object recognition and classification went wrong and a Ukrainian soldier was classified as a Russian soldier. Nobody was killed, but they saw this mistake and they’re like, “Okay, we cannot delegate these decisions to a machine.” So it can help to classify the objects it sees, but the final decision and final confirmation is still made by a human — just more from a safety standpoint rather than an ethical one.
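To make the idea concrete, here is a minimal illustrative sketch of the human-in-the-loop gate Bondar describes: onboard models propose targets, but only an explicit human confirmation can authorize engagement. All names, fields, and thresholds below are hypothetical, not drawn from any fielded system.

```python
# Illustrative sketch only: automatic target recognition proposes candidates,
# but a human makes the final engagement decision. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g., "vehicle" or "soldier", from the recognition model
    confidence: float   # model confidence in [0, 1]
    position: tuple     # (lat, lon) supplied by autonomous navigation


def propose_targets(detections: list[Detection], min_conf: float = 0.9) -> list[Detection]:
    """Filter candidates by confidence; this step never engages anything."""
    return [d for d in detections if d.confidence >= min_conf]


def human_confirms(d: Detection) -> bool:
    """The operator reviews the candidate and makes the final call."""
    answer = input(f"Engage {d.label} at {d.position} "
                   f"(confidence {d.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"


def engagement_loop(detections: list[Detection]) -> None:
    for candidate in propose_targets(detections):
        # The machine classifies; only human confirmation triggers action.
        if human_confirms(candidate):
            print(f"Strike authorized on {candidate.label}.")
        else:
            print("Strike aborted; operator overrode the model.")


if __name__ == "__main__":
    engagement_loop([Detection("vehicle", 0.94, (48.5, 37.9)),
                     Detection("soldier", 0.71, (48.6, 37.8))])
```

The point of the structure is that the model’s output is only ever a proposal; the control flow cannot reach a strike without a human decision.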
What does the near future hold for AI in warfare — even before killer robots?
I think the next step is more autonomy: increasing the number of autonomous functions, but still keeping humans in the loop. I’m not even talking about sci-fi swarms of drones. I’m talking about systems being able to make decisions collaboratively and talk to each other — for example, aerial drones and ground systems that can communicate, observe and understand what’s happening, and decide how to better execute the mission. Rather than launching thousands of drones to display a cool swarm flying in the sky, the practical driver is that people are a very limited and expensive resource. That’s why operations and missions will become less and less manned: humans will be removed from the direct battlefield and replaced with robots. So more autonomy in the robots themselves, and more communication and decision-making among different unmanned, uncrewed systems — that’s what I would say is the nearest future on the battlefield.
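As a rough illustration of the machine-to-machine coordination Bondar anticipates, here is a toy sketch in which an aerial scout shares an observation and a ground system chooses how to act on it. The message format and every name are invented for illustration; no real protocol or fielded software is implied.

```python
# Toy sketch of cross-platform coordination: an aerial drone reports what it
# sees, and a ground system adjusts its plan. Entirely hypothetical.
import json


def scout_report(sensor_reading: dict) -> str:
    """Aerial drone serializes an observation for its teammates."""
    return json.dumps({"type": "observation", **sensor_reading})


def ground_decide(message: str) -> str:
    """Ground system picks a movement action from a shared observation.
    Anything kinetic would still require human approval, per the interview."""
    obs = json.loads(message)
    return "reroute" if obs.get("obstacle") else "advance"


msg = scout_report({"obstacle": True, "grid": "37U-DQ-1234"})
print(ground_decide(msg))  # -> reroute
```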
DeepSeek says no to outside investment — for now
DeepSeek founder Liang Wenfeng has shrugged off hungry requests to invest in the Chinese artificial intelligence startup, according to a Monday report in the Wall Street Journal.
The company’s AI models are open-source and free to use, and Liang is reportedly hesitant to take any outside investment that could change that.
The chatbot company has amassed millions of users while drawing scrutiny from global regulators concerned about a Chinese company handling sensitive user data. Chinese tech giants Tencent and Alibaba are among those that have tried to invest in DeepSeek. The company is effectively controlled by Liang through his hedge fund High-Flyer, though High-Flyer spun off DeepSeek as an independent company when the startup was created in 2023.
For now, DeepSeek will continue to fly solo. But with pressure from investors, who knows how long Liang will be willing to hold on to the enterprise by himself?
Palantir delivers two key AI systems to the US Army
The Peter Thiel-founded artificial intelligence company Palantir said on Friday that it’s rolling out its first two AI systems to the US Army.
Palantir won a $178 million contract in March 2024. It has promised to deliver a total of 10 systems under this contract, all under the name TITAN, the Tactical Intelligence Targeting Access Node. TITAN is a mobile ground station meant to use AI to assist with warfare strategy and strike targeting.
Palantir, largely a software-focused company, isn’t executing the contract alone. It partnered with Northrop Grumman and L3Harris, as well as the autonomous systems startup Anduril, founded by Palmer Luckey, to help with TITAN. The systems are packed into the backs of trucks, allowing soldiers to make AI-informed decisions wherever they are.
The US Department of Defense has ramped up its procurement of AI technology in the past year. On Wednesday, the Pentagon announced a new multimillion-dollar contract with Scale AI for a project called Thunderforge to assist military decision-making. With China’s increased AI ambitions staring the United States in the face, expect military contracts for AI to only increase in the months to come.
The Justice Department ends its attempt to make Google sell its AI stake
On Friday, the US Department of Justice ended its bid to make Google sell off its stake in Anthropic.
The government, which in August won a crucial antitrust ruling finding that Google illegally monopolized the search engine business, is still seeking to break up the company, one of the world’s largest firms. But it will now focus on trying to force Google to sell off its Chrome web browser.
In addition to its own in-house AI development, spurred by the 2014 acquisition of the British lab DeepMind, Google has in recent years invested $3 billion in Anthropic, which makes the chatbot Claude. That’s good for a 10% stake in the OpenAI competitor. Anthropic told the judge that losing Google’s investment would hand a strategic advantage to OpenAI and its main investor Microsoft. Google is not the only Silicon Valley investor in the firm; Amazon has also invested $8 billion in the startup.
China announces a state-backed AI fund
Chinese officials on Thursday announced a new state fund to invest in cutting-edge technology, including artificial intelligence.
Zheng Shanjie, chairman of the National Development and Reform Commission, the country’s economic planning agency, told reporters that the “state venture capital guidance fund” will bring in $138 billion over 20 years from local governments and private firms.
The fund comes two months after the Chinese firm DeepSeek unveiled its R1 artificial intelligence model, which quickly became one of the world’s top-performing systems — essentially China’s first model that can compete with those from Silicon Valley firms like Anthropic, Google, Meta, and OpenAI. Since then, Chinese tech giant Alibaba has released its own model, called QwQ-32B, to rival DeepSeek and other major players.
The United States has tried to cut off China from its top AI chip companies, but the country still has managed to build AI models — either from smuggled chips or through, as DeepSeek claimed, efficient new means. A new and emboldened China now wants the world to know it’s ready to publicly write big checks to spur its domestic AI industry.
Hard Numbers: Seeking employees with AI skills, Cursor’s cash, Kremlin lies infiltrate chatbots, Singapore’s aging population, VC’s spending spree, OpenAI ❤️s CoreWeave
25: Nearly 25% of all US technology jobs posted so far this year have sought employees with artificial intelligence-related skills. The share was even higher for IT jobs: 36% of January postings asked for AI skills. It seems white-collar employees will need some proficiency with AI tools in the years ahead.
10 billion: Anysphere, the startup behind the AI coding assistant Cursor, is in talks to raise money at a $10 billion valuation, according to a report in Bloomberg on Friday. Talks of a new funding round come only three months after it last raised capital in December 2024 — $100 million at a $2.5 billion valuation. AI assistants like Cursor, Devin, and Windsurf have become popular ways for software developers to code in recent months.
3.6 million: The misinformation monitoring company NewsGuard found 3.6 million articles from 2024 featuring Russian propaganda in the training sets of 10 leading AI chatbots, including OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok, plus Perplexity’s AI search engine. A Kremlin network called Pravda, it says, has deliberately tried to flood search results and web crawlers with “pro-Kremlin falsehoods,” many of which are about Ukraine.
25: By 2030, 25% of Singaporeans will be 65 or older. That’s up from just 10% in 2010, a massive demographic shift for the small but bustling Asian country. Singapore’s government and companies are deploying AI solutions, including in-home fall-detection systems and patient-monitoring systems in hospitals. Without some technological assistance, the country will need to hire an additional 6,000 nurses and elder care workers annually to meet its aging population’s needs.
30 billion: Investment firms have poured $30 billion into US startups this quarter, according to PitchBook data reported by the Financial Times on Sunday. Another $50 billion is reportedly on the way, with Anduril, OpenAI, and other companies raising money. This could be the biggest year for US venture capital since 2021, when firms invested $358 billion.
12 billion: OpenAI has signed a five-year $12 billion contract with the cloud computing startup CoreWeave. The ChatGPT maker will take a $350 million equity stake in CoreWeave, which specializes in cloud-based processors for AI companies, ahead of its planned IPO, which is expected in the coming weeks.
Did Biden’s chip rules go too far?
The “AI Diffusion Rule,” announced on Jan. 13 by the outgoing Biden administration, divides countries into three tiers with varying restrictions on their access to American AI chips. While close allies like Canada and the UK face few limits, many partners, including India, Switzerland, and Israel, fall into the second tier, with significant restrictions on how many chips they can order. A third tier of rivals like China and Russia is cut off completely.
Microsoft’s critique
Microsoft Vice Chair and President Brad Smith didn’t mince words. The rule “undermines” US AI leadership and will ultimately give “China a strategic advantage,” he wrote in a blog post. Smith argued that the rule’s restrictions on allies would backfire, forcing countries to look elsewhere for AI infrastructure — likely to China. While Microsoft waited until now, Nvidia criticized the rule immediately after it was announced, saying that it “threatens to derail innovation and economic growth worldwide.”
The goal of Biden’s export controls has been clear: prevent China from accessing cutting-edge AI infrastructure needed to train and deploy top models while maintaining sales to friendly markets. While Biden’s chip controls began in 2022, the AI Diffusion Rule represents the broadest attempt to prevent advanced computing power from reaching China.
What the Diffusion Rule accomplishes
Xiaomeng Lu, director of geo-technology at Eurasia Group, sees the rule as “a move to alienate US allies and partners.” While the Trump administration might tighten rules for China, it could potentially relax them for other countries, she says.
Jeremy Mark, a nonresident senior fellow at the Atlantic Council’s GeoEconomics Center, said the implementation seemed rushed: “As with any wide-reaching policy that is put together in a rush, there will be unintended consequences.”
Jacob Feldgoise, a data research analyst at Georgetown University’s Center for Security and Emerging Technology, questions the primary assumptions of all of Biden’s export controls on chips. “They assume that compute scaling will continue and that algorithmic improvements can’t substitute for compute,” he said. “If those assumptions break down, the controls will further struggle to control the spread of AI capabilities.”
The loopholes in the plan
The Biden export controls haven’t worked as expected. The Chinese company DeepSeek claims to have trained an industry-leading model with far fewer chips than top US labs use, though the US is currently investigating whether it had access to restricted chips.
Meanwhile, Chinese buyers have been circumventing the export rules anyway. On Sunday, the Wall Street Journal published an investigation that found Chinese firms ordering Nvidia’s Blackwell AI chips through third parties in neighboring countries. And underground markets across China have long sold Nvidia chips sourced from unknown places.
What Trump could do
Mark said that Trump could “tighten restrictions on technology sales to China even more than Biden” but said it’s impossible to tell what will come through as policy and what is posturing for future negotiations.
Feldgoise believes further tightening on China is likely, but notes that softening the policy on other countries could undermine that effort. “The challenge with loosening controls on other countries is that doing so would likely undermine the administration's objective of cracking down on chip smuggling to China.”
Domestic chip production and cutting off China will likely remain priorities under Trump, continuing two rare areas of bipartisan agreement. Silicon Valley has been ingratiating itself with the Trump administration in recent months and, on this front, hopes that deregulation in key areas could clear the way for better sales around the world.
Europe’s AI deepfake raid
Europol, the European Union’s law enforcement agency, arrested 24 people across 19 countries last Wednesday in a global crackdown on AI-generated child pornography. The arrests stretched beyond the EU with suspects taken into custody in Australia, the United Kingdom, and New Zealand in coordination with local police.
The crackdown is part of a campaign called Operation Cumberland, which began in November with the arrest of a lead suspect in Denmark. The ringleader allegedly ran a website where people paid to access images of children that he created with the help of artificial intelligence. Europol wrote in a press release that there are 273 suspects in total, and that investigators have carried out 33 house searches and seized 173 electronic devices.
“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material (CSAM), making it exceptionally challenging for investigators, especially due to the lack of national legislation addressing these crimes,” Europol wrote in a statement.
The agency noted that EU member states are currently discussing regulations specifically addressing this type of content, so it’s unclear what the exact legal basis for the arrests is. (Europol did not respond to a request for comment by press time.) Nick Reiners, a senior geo-technology analyst at Eurasia Group, said he believes the legal basis would be national laws that do not distinguish CSAM from AI-generated CSAM. That said, there’s good reason for a new EU law: “The objective of the proposed new Directive is primarily to harmonize, update and strengthen national laws across EU member states, in part to make it easier to prosecute,” Reiners added.
The agency has said that more arrests are on the way in the coming weeks.