AI and war: Governments must widen safety dialogue to include military use
Hardly a week goes by without the announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the world's democracies. They are all wrestling with how to regulate AI, and they almost invariably choose to focus on safety.
Last week, the US Department of Homeland Security joined these efforts with an advisory body of its own, composed largely of industry representatives, along with some members of academia and civil society, to examine the safety of AI in its own context. What is remarkable amid all this focus on safety is how little emphasis, or even attention, goes to restricting or putting guardrails around the use of AI by militaries.
And that is remarkable because we can already see the harms of overreliance on AI, even as industry pushes it as its latest opportunity. Just look at the venture capital pouring into defense tech. So I think we should widen the lens of the AI safety conversation to include binding rules on military uses of AI. The harms are real, and they involve life-and-death situations. Just imagine someone being misidentified as a legitimate target for a drone strike, or consider the uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are deployed on the battlefield with few rules around them, because the fog of war also makes it possible for companies to jump into the void.
So it is important that AI safety efforts at least include a focus on and discussion of what constitutes proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world, and that democratic countries put rules in place to ensure that the rules-based order, international law, human rights, and humanitarian law are upheld even where the latest technologies like AI are concerned.
Ukraine’s AI battlefield
Saturday marks the two-year anniversary of Russia’s invasion of Ukraine.
Over the course of this bloody war, the Ukrainian defense strategy has grown to a full embrace of cutting-edge artificial intelligence. Ukraine has been described as a “living lab for AI warfare.”
That capability comes largely from the American government but also from American industry. With the help of powerful American tech companies such as Palantir and Clearview AI, Ukraine has deployed AI throughout its military operations. The biggest tech companies have been involved, too; Amazon, Google, Microsoft, and Elon Musk’s Starlink have also provided vital tech to aid Ukraine’s war effort.
Ukraine is using AI to analyze large data sets drawn from satellite imagery, social media, and drone footage, and to supercharge its geospatial intelligence and electronic warfare efforts. AI-powered facial recognition and other imagery technology has been instrumental in identifying Russian soldiers, collecting evidence of war crimes, and locating land mines.
And increasingly, weapons are also powered by AI. According to a new report from Bloomberg, US and UK leaders are providing AI-powered drones to Ukraine, which would fly in large fleets, coordinating with one another to identify and take out Russian targets. There is no shortage of ethical concerns about the nature of AI-powered warfare, as we have written about in the past, but that hasn’t stymied President Joe Biden’s commitment to beating back Vladimir Putin and defending a strategically crucial ally.
Reports about Russia’s own use of AI in warfare are murkier, though there’s some evidence to suggest they may be using the technology to fuel disinformation campaigns as well as build weaponry. But Ukraine might have an advantage: Recently, Russia’s fancy new AI-powered drone-killing system was reportedly blown up by, of all things, a Ukrainian drone.
Ukraine’s stand against Russia has been called a David and Goliath story, but it’s also a battle evened by technological prowess. It’s a view into the future of warfare, where the full strength of Silicon Valley and the US military-industrial complex meet.

QUAD supply chain strategy to consider values; new AI-powered weapons
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
How will the QUAD leaders address the microchip supply chain issue during their meeting this week?
Well, the idea is for the leaders of the US, Japan, India, and Australia to collaborate more intensively on building secure supply chains for semiconductors, in response to China's growing assertiveness. I think it's remarkable to see values being articulated much more clearly by world leaders when they talk about governing advanced technologies. The current draft statement ahead of the QUAD meeting says that collaboration should be based on respect for human rights.
Will AI dominate the future battlefields?
Well, we've already seen new uses of AI-powered arms, but the increased use of AI also creates new opportunities for cyberattacks by expanding the vulnerable attack surface. The New York Times recently investigated how Iran's top nuclear official was assassinated with an AI-assisted, remote-controlled killing device. The gun, equipped with intelligent satellite systems, used AI to verify when and at whom to fire the lethal shots. So there are new weapons, but also new opportunities to exploit vulnerabilities in AI. It is safe to say that warfare is already changing, and that conflict and cyberattacks, resulting both from the specific use of AI in arms and from its broad uptake across society, will change dramatically.