Robots are coming to a battlefield near you
Artificial intelligence is revolutionizing everything from education, health care, and banking to how we wage war. By simplifying military tasks, improving intelligence gathering, and fine-tuning weapons accuracy (all of which could make wars less deadly), AI is redefining our concept of modern military might.
At the most basic level, militaries around the world are harnessing AI to train algorithms that make their work faster and more effective. Today, AI is used for image recognition, cyber warfare, strategic planning, logistics, bomb disposal, command and control, and more.
But there’s also plenty of debate over whether this could lead to killer robots and an apocalyptic endgame. Science fiction is full of such visions, from Isaac Asimov’s rogue robots and the “Terminator” films’ Skynet to Matthew Broderick racing to stop a supercomputer from unleashing nukes in “War Games.” Can we have less deadly wars without robots taking over the world?
Much of the concern about the future centers on lethal autonomous weapons, aka LAWs or killer robots: military tools that can be programmed to seek out, target, and engage in combat without a human steering them. LAWs could eventually become commonplace in war, and while critics have long campaigned to ban them and halt their development, militaries around the globe are exploring and testing the technology.
The US military, for example, is reportedly using an AI-powered quadcopter in operations, and earlier this year the Air Force gave AI the controls of an F-16 for 17 hours.
During the first AUKUS AI and autonomy trial this spring, the UK tested a collaborative swarm of drones, which were able to detect and track military targets. And the US has reportedly developed a “pilotless” XQ-58A Valkyrie drone it hopes will “become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle.” While the AI will help identify the targets, humans will still need to sign off before they shoot – at least for now.
Samuel Bresnick, a research fellow at Georgetown University's Center for Security and Emerging Technology, says the potential uses of AI permeate all aspects of the military. AI can help the military “sift through huge amounts of information and pick out patterns,” he says, and this is already happening across the military’s intelligence, surveillance, and reconnaissance systems.
AI can also be used for advanced image recognition to aid military targeting. “For example, if the US has millions of hours of drone footage from the wars in the Middle East,” he says, “[they] can use that as training data for AI algorithms.”
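To make that concrete, here is a minimal, hypothetical sketch in Python (using PyTorch) of what “using footage as training data” looks like in practice: video frames are sorted into labeled folders, and a pretrained image classifier is fine-tuned on them. The folder names and labels are invented for illustration; the article does not describe any specific system.

```python
# A minimal sketch of the kind of pipeline Bresnick describes: fine-tuning an
# off-the-shelf image classifier on labeled video frames. Paths and label
# names here are hypothetical placeholders, not a real system.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Frames extracted from footage, sorted into one folder per label,
# e.g. frames/vehicle/, frames/building/, frames/clear/ (hypothetical).
dataset = datasets.ImageFolder("frames/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The design choice here, freezing the pretrained backbone and retraining only the final layer, is a standard shortcut when the labeled dataset is small relative to what training a network from scratch would require.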
AI can also help militaries plan hypersonic or ballistic missile trajectories; China reportedly used AI to develop a defensive system to detect such missiles.
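The article does not say how such systems work under the hood, but a classical building block of any trajectory-tracking pipeline is the Kalman filter, which fuses noisy sensor returns into a smoothed estimate of position and velocity. Below is a minimal one-dimensional sketch in Python; all of the numbers (time step, noise levels, target speed) are invented for illustration.

```python
# A minimal 1-D constant-velocity Kalman filter: estimate the position and
# velocity of a target from noisy range measurements. All numbers are
# illustrative; real trackers use richer motion models and sensor fusion.
import numpy as np

dt = 1.0                                  # time between radar returns (s)
F = np.array([[1, dt], [0, 1]])           # state transition: x' = x + v*dt
H = np.array([[1, 0]])                    # we only measure position
Q = np.eye(2) * 0.01                      # process noise (model uncertainty)
R = np.array([[25.0]])                    # measurement noise (sensor error)

x = np.array([[0.0], [0.0]])              # initial state: [position, velocity]
P = np.eye(2) * 1000.0                    # initial uncertainty: very unsure

def kalman_step(x, P, z):
    # Predict: advance the state and its uncertainty one time step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    y = z - H @ x                         # innovation (measurement residual)
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a target moving at 300 m/s, observed with noisy measurements.
rng = np.random.default_rng(0)
for t in range(1, 21):
    true_pos = 300.0 * t * dt
    z = np.array([[true_pos + rng.normal(0, 5)]])
    x, P = kalman_step(x, P, z)
print(f"estimated velocity: {x[1, 0]:.1f} m/s")  # converges near 300
```

A real tracker would use a richer motion model (ballistic or maneuvering targets in three dimensions) and fuse multiple sensors, but the predict-update loop is the same.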
There are innumerable other uses too, such as advancing cyber-espionage efforts and simplifying command-and-control decision-making, but the way militaries use AI is already garnering pushback and concern. Just last week, a group of 200 people working in AI signed an open letter condemning Israel’s use of “AI-driven technologies for warmaking, in which the aim is to make the loss of human life more efficient.”
World leaders like US President Joe Biden and Chinese President Xi Jinping are likewise concerned about the global adoption of AI-infused military tech, but that’s not slowing down their own efforts to gear up and gain a strategic advantage over one another.
•••
As the US ramps up its military capabilities, it is doing so as part of an AI arms race with China.
Last week, Biden and Xi met at the Asia-Pacific Economic Cooperation summit in San Francisco, where they talked about artificial intelligence (among other things). The two world leaders “agreed to a dialogue to keep [AI] from being deployed in ways that could destabilize global security.”
As AI becomes increasingly intertwined with their countries’ military ambitions and capabilities, Biden and Xi appear interested in keeping one another in check, but neither is in any rush to sign agreements that would prevent them from gaining a technological advantage over the other. “Both of these militaries want desperately to develop these technologies because they think it’s going to be the next revolution in military affairs,” Bresnick said. “Neither one is going to want to tie their hands.”
Justin Sherman, a senior fellow at Duke University’s Sanford School of Public Policy and founder of Global Cyber Strategies, said he is concerned that AI could become the center of an arms race with no known endpoint.
“Thinking of it as a race … could potentially lead the US more toward an approach where AI systems are being built that really, as a democracy, it should not be building — or should be more cautious about building — but [they] are being built out of this fear that a foreign state might do what we do not,” Sherman said.
But AI is a large suite of technologies, and one that’s evolving incredibly quickly, so there’s no way to know where the race actually ends.
As AI plays an increasing role in the military destinies of both countries, Sherman says, there’s a risk of “the US and China constantly trying to one-up each other in the latest and greatest, and the most lethal technology just becomes more and more dangerous over time.”
Emotional AI: More harm than good?
Generative AI mimics human-generated text, images, and video, and it's got huge implications for geopolitics, economics, and security. But that's not all: Emotionally intelligent AI is on the rise.
And sometimes the results are ugly. Take the mental health nonprofit Koko, which used an AI chatbot to support counselors advising 4,000 people seeking help. The catch: The patients didn't know that a bot was generating the advice they were receiving. While users initially rated the bot-generated responses highly, the therapy lost its effectiveness once the patients were informed that they'd been talking to a fancy calculator.
The real question is: When does emotionally intelligent AI cross the line into emotionally manipulative territory?
This is not just a concern for virtual therapists: Politics could be affected too. And who knows, maybe even your favorite TV host will use generative AI to convince you to keep watching. Now there's an idea.
How robots will change the job market: Kai-Fu Lee predicts
How will artificial intelligence change the world, and especially the job market, by 2041? AI scientist Kai-Fu Lee has just written a book about precisely that, and he predicts it'll shake up almost every major industry. AI, he explains, will be most disruptive to many so-called "routine" occupations, but the damage may be reduced by shifting displaced workers into "empathetic" jobs that require a human touch. Watch his interview on GZERO World with Ian Bremmer.
Watch this episode of GZERO World with Ian Bremmer: Is a robot coming for your job? Kai-Fu Lee explains AI
Ian Bremmer explains: Should we worry about AI?
Many of us learned about the dangers of artificial intelligence thanks to Stanley Kubrick. Today, AI is doing a lot to improve our lives, but the peril remains. Ian Bremmer expects it to help with many things, especially healthcare, yet also to displace a lot of low-skilled workers in the near future. What's more, brace for AI's impact on deepfakes, misinformation, autonomous weapons systems, and surveillance of ethnic minorities.
Watch this episode of GZERO World with Ian Bremmer: Is a robot coming for your job? Kai-Fu Lee explains AI
Artificial intelligence from Ancient Greece to 2021
Did you know artificial intelligence was first conceptualized in Ancient Greece? That some of its early uses didn't work out? What did the first successful AI actually do? Today, even Alexa and Sophia are still no match for the human brain, but that'll likely change very soon. Join us for a trip down AI memory lane on the latest episode of GZERO World.
Watch this episode of GZERO World with Ian Bremmer: Is a robot coming for your job? Kai-Fu Lee explains AI
Is a robot coming for your job? Kai-Fu Lee explains AI
Artificial intelligence is changing the way we live — and very soon it'll go beyond medical breakthroughs and the algorithms that control your social newsfeeds. Will AI become the biggest technological disrupter since the Industrial Revolution, replacing many workers with robots? In this week's show, Ian Bremmer discusses the future of AI with AI scientist Kai-Fu Lee, who's just come out with a book about what our AI-driven world may look like 20 years from now.
El Salvador’s risky move to Bitcoin; future of Singapore patrol robots
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
El Salvador becomes the first country to adopt Bitcoin as legal tender. Is this a risky move?
Well, it is unclear who stands to benefit most from the president's move to adopt Bitcoin: poor shopkeepers, wealthy investors, or the president himself. With arguments that remittances are expensive and the future is digital, President Bukele leapt forward. But the immediate drop in Bitcoin's value was a live reminder of the cryptocurrency's volatility. One silver lining is that other countries can learn from the lessons El Salvador will learn under this new spotlight.
Singapore starts trialing patrol robots to deter bad social behavior. Will robots be used for law enforcement soon?
Well, I certainly hope not, because the ever-further rollout of a surveillance state is often justified with arguments of safety, security, or convenience. But by adding robots, the intensification of policing public spaces is hidden behind the novelty of a fun innovation. If anything, Singaporeans could use more personal space and freedom. And sure, that may mean that some people engage in "bad behavior," such as parking a car or a bike outside of the street lines, but as a Dutch person who has parked a few bikes in her life, I can tell you that should not be a crime.
Tech initiatives to have machines, not people, cleaning sewers in India
Bandicoot, a machine with a camera-equipped extendable robotic arm, can descend into manholes and scoop out dirt.