Israel’s lethal AI
The Israeli military is using artificial intelligence to determine bombing targets with only cursory human oversight, according to reports from The Guardian and +972 Magazine last week.
The reports cite anonymous Israeli intelligence officials who say an AI program called Lavender is trained to identify Hamas and Palestinian Islamic Jihad militants as potential bombing targets. The government has reportedly given Israel Defense Forces officers approval to take out anyone identified as a target by Lavender, and the tool has been used to order strikes on “thousands” of Palestinian targets despite a known error rate of roughly 10%. According to The Guardian, the program has identified some 37,000 potential targets.
The IDF did not deny the existence of Lavender in a statement to CNN but said AI was not being used to identify terrorists.
Militaries around the world are building up their AI capabilities, and we’ve already seen the technology play a major role on both sides of the war between Russia and Ukraine. Israel’s bloody campaign in Gaza has led to widespread allegations of indifference toward mass civilian casualties, underlined by the bombing of a World Central Kitchen aid convoy last week. Artificial intelligence threatens to play a greater role in determining who lives and who dies in war. Will it also make conflicts more inhumane?