Ukraine's tech use against Russia is revolutionizing warfare
The war in Ukraine is completely changing modern warfare. Armies increasingly rely on technology like drones and cyber intelligence instead of tanks and artillery to achieve military goals. On GZERO World with Ian Bremmer, Former NATO Supreme Allied Commander Admiral James Stavridis says warfare is “shapeshifting in front of our eyes” in Ukraine.
On the same battlefield, soldiers are digging WWI-style trenches while also using artificial intelligence and unmanned systems. These new technologies have allowed Ukraine, a country without a navy, to take down Russia’s flagship guided-missile cruiser in the Black Sea. The US is learning battle-tested strategies from Ukraine’s army, and Stavridis predicts that in the next four years, we’ll see much less military spending on armies and personnel. Instead, the focus will shift to new technology and the experts who can deploy it.
“This is the new triad of warfare,” Stavridis says. “It’s unmanned systems, cyber and artificial intelligence, and special forces.”
Watch the full interview with Admiral Stavridis on this episode of GZERO World: The future of modern warfare
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
The future of modern warfare
Technology in Ukraine is transforming the battlefield in real time. How will it change US national security strategy? And could what's happening in Ukraine shift Chinese President Xi Jinping’s future plans for Taiwan? Former NATO Supreme Allied Commander Admiral James Stavridis joins Ian Bremmer on GZERO World to talk about how technology is creating a “new triad” of warfare: unmanned systems, cyber and artificial intelligence, and special forces.
Modern conflict no longer requires huge standing armies to fight effectively; just look at Ukraine’s success in the Black Sea. Smaller militaries are increasingly using drones, satellites, and unmanned systems against larger armies. Stavridis says Taiwan is a “resistance fighter’s dream” because of its geography and resources. Plus, it manufactures about half of the world’s computer chips, which China relies on for its technology infrastructure. But Stavridis also warns the same technology is empowering malefactors and terrorist groups, creating dangerous asymmetrical warfare.
“The US will continue to be the preeminent nation at projecting power. China will make a play to do it. Russia, the lights are going to go out,” the Admiral says. “But it’s acts of terrorism and the ability to use weapons of mass disruption, that’s what you need to worry about.”
For more on technology and the transformation of war, check out Admiral Stavridis’ book "2054: A Novel". His newest book, "The Restless Wave", a historical novel about the rise of new technology in the Pacific during WWII, is out October 8.
AI and war: Governments must widen safety dialogue to include military use
Hardly a week goes by without the announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the world's democratic governments. They're all wrestling with how to regulate AI, and, with little variation, they seem to settle on a focus on safety.
Last week, the US Department of Homeland Security joined this line of efforts with its own advisory body: lots of industry representatives, along with some from academia and civil society, looking at the safety of AI in its own context. What's remarkable amid all this focus on safety is how little emphasis, or even attention, goes to restricting or putting guardrails around the use of AI by militaries.
And that is remarkable because we can already see the harms of overreliance on AI, even as industry pushes it as its latest opportunity. Just look at the venture capital being poured into defense tech, or “DefTech” as it's popularly called. So I think we should push to widen the lens when we talk about AI safety to include binding rules on military uses of AI. The harms are real; these are life-and-death situations. Just imagine somebody being misidentified as a legitimate target for a drone strike, or consider the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield without many rules around them, because the fog of war also makes it possible for companies to jump into the void.
So it is important that AI safety at least includes a focus and a discussion on what constitutes proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world, and that democratic countries put rules in place to make sure the rules-based order, international law, and international human rights and humanitarian law are upheld even with the latest technologies like AI.
Israel's Lavender: What could go wrong when AI is used in military operations?
So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.
As I discussed in an earlier video, the Israeli Army has been using AI in their military operations for some time now. This isn't the first time the IDF has used AI to identify targets, but historically, these targets had to be vetted by human intelligence officers. But according to the sources in this story, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.
I should say that the IDF denies this. In a statement to the Guardian, it said that Lavender "is simply a database whose purpose is to cross-reference intelligence sources." If the reporting is accurate, however, it means we've crossed a dangerous Rubicon in how these systems are being used in warfare. Let me frame these comments with the recognition that these debates are ultimately about systems that take people's lives. That makes the debate about whether we use them, how we use them, and how we regulate and oversee them both immensely difficult and urgent.
In a sense, these systems and the promises they're based on are not new. Companies like Palantir have long promised clairvoyance from more and more data. At their core, these systems all work the same way: users upload raw data into them. In this case, the Israeli army loaded in data on known Hamas operatives, location data, social media profiles, and cell phone information, and the system then uses these data to create profiles of other potential militants.
But of course, these systems are only as good as the training data they are based on. One source who worked with the team that trained Lavender said that some of the data came from employees of the Hamas-run Internal Security Ministry, who aren't considered militants. The source said that even if you believe these people are legitimate targets, using their profiles to train the AI system makes it more likely to target civilians. And this does appear to be what's happening. The sources say Lavender is 90% accurate, but that raises profound questions about how accurate we expect and demand these systems to be. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT hallucinates 10% of the time, maybe we're okay with that. But if an AI system is targeting innocent civilians for assassination 10% of the time, most people would likely consider that an unacceptable level of harm.
With the rise of AI systems in the workplace, it seems inevitable that militaries around the world will begin to adopt technologies like Lavender. Countries, including the US, have set aside billions of dollars for AI-related military spending, which means we need to update our international laws for the AI age as urgently as possible. We need to know how accurate these systems are, what data they're being trained on, and how their algorithms identify targets, and we need to oversee the use of these systems. It's not hyperbolic to say that new laws in this space will literally be the difference between life and death.
I'm Taylor Owen, and thanks for watching.