In Ukraine’s AI-enabled war against Russia, humans still call the shots
Kateryna Bondar wants you to know that “killer robots” aren’t deployed on the battlefield in Ukraine — at least not yet. Bondar, a fellow with the Wadhwani AI Center at the Center for Strategic and International Studies, is a former advisor to the government of Ukraine, where she worked on defense and innovation, among other things.
In a new report for CSIS, Bondar seeks to dispel myths about the AI-enabled battlefield in Ukraine’s war against Russia, separating on-the-ground realities from visions of science fiction. Bondar spoke to GZERO’s Scott Nover about Ukraine’s capabilities, its reliance on foreign powers, ethical considerations about autonomous systems, and what’s next for AI in warfare.
This interview has been edited for clarity and length.
Scott Nover: Why did you want to write this report?
Kateryna Bondar: I worked in the Ukrainian Ministry of Defense and have a lot of connections with the Ukrainian military, so I’m pretty familiar with what is happening there. And when I discuss technology and what is happening on the front lines with people there, they’re like, “Come on, we’re so far from real killer robots and AI wars. When we read all those articles, it’s good that people think Ukraine is so advanced, but the reality is not like that.” So the actual goal of this report was to objectively evaluate the state of AI in the war. Full autonomy is really far from actual deployment on the battlefield. There is autonomy there, but it’s very partial, with separate functions like autonomous navigation and automatic target recognition. These pieces of autonomy exist and are deployed on the front lines, but they are not fully autonomous systems.
How does Ukraine stack up against Russia technologically?
Ukraine is still more advanced. Before the war, a lot of Ukrainian engineers worked as outsourced developers for US companies, and the Ukrainian talent pool is bigger. The Ukrainian patriotic movement was also far bigger than Russia’s, which motivated a lot of software engineers to join the army. When I talk to people who track Russian technology development, we both agree that Ukraine is still leading. Of course, it’s a constant race, but, for now, Ukraine is leading in software development. What’s also important is that Ukraine — I hope and I think — has finally realized this competitive advantage and is really pushing software development and deployment through procurement of AI-enabled drones. When I talk to the Ukrainian military, and specifically the Unmanned Systems Forces, a separate branch they created, they say they currently conduct about 80% of their strikes with drones, which I think is an impressive number. Drones can replace conventional weapon systems — but not completely. Of course, they still need artillery, but it shows that it’s possible, and I think what’s happening on the front line in Ukraine is really innovative and impressive.
You mentioned the West. With tensions bubbling up between Ukraine and the Trump administration, I’m wondering: How self-sufficient is Ukraine?
Ukraine is capable of producing its own drones right now — the supply chain is established. It’s a bit more expensive than buying components from abroad, especially from China, but Ukraine had to deal with this even before the situation with the United States. They were mostly using Chinese components, and China put export controls on selling components to Ukrainians. That was the main reason Ukraine started creating its own supply chain to build its own drones. The only components Ukraine cannot produce by itself right now are chips and electronics. China is the best source for those because no one can compete with Chinese prices, unfortunately. So Silicon Valley and US producers are not very competitive here. But Ukrainians are getting Chinese components and US components, basically anything they can get that is cost-efficient. So yeah, Ukraine is moving toward being self-sufficient, but for chips and electronics, it still relies on external components.
When we talk about AI and chips, we’re usually talking about expensive Nvidia GPUs, but for drone warfare you need small, cheap chips, right?
Yes, you don’t need a super sophisticated huge model installed on that small little chip and that small little drone — especially if it’s a kamikaze drone or a bomber. Most of the time it’s a one-way ticket. There’s no point in installing something really sophisticated, cool, and expensive on something that you use once. So, smaller models, simpler models, smaller chips, cheaper chips — that's how you create a kind of balance between efficiency and cost.
We’re far away from killer robots, but what are the current ethical questions that Ukraine is — or should be — grappling with in regard to AI on the battlefield?
I’ll be very honest and open with you. Ukraine doesn’t treat ethical questions as the first priority, and for this exact reason it doesn’t have any regulation limiting defense and military applications of AI, because what it currently needs is something very efficient that can kill Russians. That’s almost an official position. On the other hand, when we’re talking about the technology, its development, and how much you can rely on it, this is where Ukraine still sees a problem. All branches of the military that I’ve been talking with — deep strikes, tactical level, everyone — are saying: we don’t trust the technology yet, so humans have to be in the loop.
They do combine different functions: you can install a chip with a model for target recognition and another chip with a model that enables the drone to fly autonomously. So basically, this drone can be autonomous: it can find a target, identify it, and decide to strike and engage it. But they don’t allow it to be fully autonomous, because the number of mistakes and false positives is still way too high to trust the technology. So the common vision is that a human has to be able to intervene and stop the system from striking or executing the mission.
So, there is a common vision — but without any formal strategy or document at an official level, no legislation or regulation. There is only a white paper released by the Ministry of Digital Transformation — it doesn’t have any legal force; it’s just them sharing their vision. It says: we aim not to limit military AI, and we want to comply with international law and regulations, which also contains a lot of contradictions. Yeah, we want to be compliant with all international legislation and laws, which don’t exist, and in the meantime, inside the country, we won’t stop anyone from developing autonomous weapons.
“Human-in-the-loop” is often an ethical term, meaning that systems shouldn’t be making decisions of war autonomously. But you’re saying that it’s also a strategic necessity for Ukraine right now.
Yeah, and it’s more of a safety measure, because there have been cases where object recognition and classification went wrong and a Ukrainian soldier was classified as a Russian soldier. Nobody was killed, but they saw the mistake and said, “Okay, we cannot delegate these decisions to a machine.” So the machine can help classify the objects it sees, but the final decision and final confirmation are still made by a human — more from a safety standpoint than an ethical one.
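As an illustration only: the human-in-the-loop gate Bondar describes, in which an onboard model may classify and propose but only an operator’s explicit confirmation can authorize an engagement, could be sketched like this. All names, labels, and thresholds here are hypothetical, not any system actually fielded in Ukraine.

```python
# Hypothetical sketch of a human-in-the-loop engagement gate.
# The recognition model only *proposes*; without explicit human
# confirmation, the default outcome is always "abort".

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what the onboard model thinks it sees
    confidence: float   # model confidence, 0.0 to 1.0

def propose(det: Detection, threshold: float = 0.9) -> str:
    """Surface a detection to the operator only if confidence is high."""
    if det.confidence < threshold:
        return "ignore"              # too uncertain to even surface
    return "await_confirmation"      # never auto-engage

def decide(proposal: str, operator_confirmed: bool) -> str:
    """Engage only when the model proposed AND a human confirmed."""
    if proposal == "await_confirmation" and operator_confirmed:
        return "engage"
    return "abort"
```

The point of the structure is that no code path reaches "engage" without the human flag: decide("await_confirmation", False) returns "abort", mirroring the requirement that a person must always be able to intervene.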
What does the near future hold for AI in warfare — even before killer robots?
I think the next step is more autonomy: increasing the number of autonomous functions while still keeping humans in the loop. I’m not even talking about sci-fi swarms of drones. I’m talking about systems being able to make decisions collaboratively and talk to each other: for example, aerial drones and ground systems that can communicate, observe, understand what’s happening, and decide how best to execute a mission. It’s less about launching thousands of drones and displaying a cool swarm flying in the sky; in practice, people are a very limited and expensive resource. That’s why operations and missions will become less and less manned, and humans will be removed from the direct battlefield and replaced with robots. So more autonomy in the robots themselves, and more communication and decision-making among different unmanned, uncrewed systems — that’s what I would say is the nearest future on the battlefield.