Robots are coming to a battlefield near you
Artificial intelligence is revolutionizing everything – from education, health care, and banking to how we wage war. By simplifying military tasks, improving intelligence-gathering, and fine-tuning weapons accuracy – all of which could make wars less deadly – AI is redefining our concept of modern military might.
At the most basic level, militaries around the world are harnessing AI to train algorithms that make their work faster and more effective. Today, AI is used for image recognition, cyber warfare, strategic planning, logistics, bomb disposal, command and control, and more.
But there’s also plenty of debate over whether this could lead to killer robots and an apocalyptic endgame. Science fiction offers no shortage of images of this – from Isaac Asimov’s rogue robots, the “Terminator” and Skynet, to Matthew Broderick racing to stop a supercomputer from unleashing nukes in “War Games.” Can we have less deadly wars without robots taking over the world?
Much of the concern about the future centers on lethal autonomous weapons, aka LAWs or killer robots – military tools that can be programmed to seek out targets and engage in combat without a human steering them. LAWs could eventually become commonplace in war, and while critics have long campaigned to ban them and halt their development, militaries around the globe are exploring and testing this technology.
The US military, for example, is reportedly using an AI-powered quadcopter in operations, and earlier this year, the Air Force gave AI the controls of an F-16 for 17 hours.
During the first AUKUS AI and autonomy trial this spring, the UK tested a collaborative swarm of drones, which were able to detect and track military targets. And the US has reportedly developed a “pilotless” XQ-58A Valkyrie drone it hopes will “become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle.” While the AI will help identify the targets, humans will still need to sign off before they shoot – at least for now.
Samuel Bresnick, a research fellow at Georgetown University's Center for Security and Emerging Technology, says the potential uses of AI permeate all aspects of the military. AI can help the military “sift through huge amounts of information and pick out patterns,” he says, and this is already happening across the military’s intelligence, surveillance, and reconnaissance systems.
AI can also be used for advanced image recognition to aid military targeting. “For example, if the US has millions of hours of drone footage from the wars in the Middle East,” he says, “[they] can use that as training data for AI algorithms.”
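To make the training-data point concrete, here is a minimal sketch (purely illustrative, not a description of any military system) of how labeled frames extracted from footage could be used to fine-tune an off-the-shelf image classifier. The folder layout, class names, and training settings are all hypothetical assumptions.

```python
# Illustrative sketch only: fine-tuning a generic image classifier on
# labeled video frames. Paths and classes below are hypothetical.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Frames extracted from footage, sorted into per-class folders, e.g.
# frames/vehicle/, frames/building/, frames/background/  (assumed layout)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("frames/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a model pretrained on generic imagery and retrain only
# its final classification layer for the new labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The basic pattern – start from a model trained on generic imagery, then retrain it on domain-specific labeled frames – is the same transfer-learning approach used across civilian computer vision; what changes at military scale is the volume of footage and the stakes of a misclassification.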
AI can also help militaries plan hypersonic or ballistic missile trajectories — China reportedly used AI to develop a defensive system to detect such missiles.
There are innumerable other uses too, such as advancing cyber-espionage efforts and simplifying command-and-control decision-making, but the way militaries use AI is already garnering pushback and concern. Just last week, a group of 200 people working in AI signed an open letter condemning Israel’s use of “AI-driven technologies for warmaking, in which the aim is to make the loss of human life more efficient.”
World leaders like US President Joe Biden and Chinese President Xi Jinping are likewise concerned about the global adoption of AI-infused military tech, but that’s not slowing down their own efforts to gear up and gain a strategic advantage over one another.
•••
The US is ramping up its military capabilities as part of an AI arms race with China.
Last week, Biden and Xi met at the Asia-Pacific Economic Cooperation summit in San Francisco, where they talked about artificial intelligence (among other things). The two world leaders “agreed to a dialogue to keep [AI] from being deployed in ways that could destabilize global security.”
As AI becomes increasingly intertwined with their countries’ military ambitions and capabilities, Biden and Xi appear interested in keeping one another in check but are in no rush to sign agreements that would prevent them from gaining a technological advantage over the other. “Both of these militaries want desperately to develop these technologies because they think it’s going to be the next revolution in military affairs,” Bresnick said. “Neither one is going to want to tie their hands.”
Justin Sherman, a senior fellow at Duke University’s Sanford School of Public Policy and founder of Global Cyber Strategies, said he is concerned that AI could become the center of an arms race with no known endpoint.
“Thinking of it as a race … could potentially lead the US more toward an approach where AI systems are being built that really, as a democracy, it should not be building — or should be more cautious about building — but [they] are being built out of this fear that a foreign state might do what we do not,” Sherman said.
But because AI is a broad suite of technologies that is evolving incredibly quickly, there’s no way to know where the race actually ends.
As AI plays an increasing role in the military destinies of both countries, Sherman says, there’s a risk of “the US and China constantly trying to one-up each other in the latest and greatest, and the most lethal technology just becomes more and more dangerous over time.”