The military jet that acts alone
The US Air Force and the Defense Advanced Research Projects Agency, aka DARPA, have been tinkering with the latest aerial weapons. On April 17, DARPA confirmed that in military exercises with the Air Force last year, an AI-controlled jet was pitted against a human pilot in an in-air dogfight simulation.
The Air Force installed its autonomous pilot system in a modified F-16 relabelled as the X-62A back in 2021. Humans were aboard the autonomous aircraft during the dogfight experiment, with the ability to take control if necessary. The military didn’t specify whether the autonomous X-62A or the human-piloted opponent, an F-16 jet, “won” the duel, which took place in September 2023, though it did say the test was a success.
“The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now,” Air Force Secretary Frank Kendall wrote in a statement. “This is a transformational moment.”
As we’ve written previously, militaries around the world are gearing up for autonomous warfare, with weapons systems able to identify and take out specific targets. The United Nations has meanwhile called the use of autonomous weapons on human targets a “moral line that we must not cross,” a signal that there will be a drumbeat of public criticism as the US and other militaries expand and deploy their AI-powered weapons.
Israel’s lethal AI
The Israeli military is using artificial intelligence to determine bombing targets with cursory oversight from humans, according to reports from The Guardian and +972 Magazine last week.
The reports cite anonymous Israeli intelligence officials, who say an AI program called Lavender is trained to identify Hamas and Palestinian Islamic Jihad militants as potential bombing targets. The government has reportedly given Israel Defense Forces officers approval to take out anyone identified as a target by Lavender. The tool has been used to order strikes on “thousands” of Palestinian targets, even though Lavender is known to have a roughly 10% error rate. According to The Guardian, the program identified some 37,000 potential targets.
In a statement to CNN, the IDF did not deny the existence of Lavender but said AI was not being used to identify terrorists.
Militaries around the world are building up their AI capacities, and we’ve already seen the technology play a major role on both sides of the war between Russia and Ukraine. Israel’s bloody campaign in Gaza has led to widespread allegations of indifference toward mass civilian casualties, underlined by the bombing of a World Central Kitchen caravan last week. Artificial intelligence threatens to play a greater role in determining who lives and who dies at war — will it also make conflicts more inhumane?