AI's role in the Israel-Hamas war so far
Artificial intelligence is changing the world, and our new video series GZERO AI explores what it all means for you—from disinformation and regulation to the economic and political impact. Co-hosted by Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, and Marietje Schaake, International Policy Fellow at Stanford's Institute for Human-Centered Artificial Intelligence and former European Parliamentarian, this weekly video series will help you keep up with and make sense of the latest news on the AI revolution.
In the first episode of the series, Taylor Owen takes a look at how artificial intelligence is shaping the war between Israel and Hamas.
As the situation in the Middle East continues to escalate, today we're asking: how is artificial intelligence shaping the war between Israel and Hamas? The short answer is: not as much as many expected it might. I think there are two cautions about the power of AI here, and one place where AI has been shown to really matter. The first caution is on the value of predictive AI. For years, many have been arguing that AI might not just help us understand the world as it is, but might actually help us predict future events. Nowhere has this been more the case than in the worlds of national security and policing.
Now, Gaza happens to be one of the most surveilled regions in the world. The use of drones, facial recognition, border checkpoints, and phone tapping has allowed the Israeli government to collect vast amounts of data about the Gazan population. Add to this the fact that the director of the Israeli Defense Ministry has said that Israel is about to become an AI superpower, and one would think that the government might have had the ability to predict such events. But on October 7th, this was notably not the case. The government, the military, and Israeli citizens themselves were taken by surprise by the attack.
The reality, of course, is that however powerful the AI might be, it is only as good as the data that's fed into it, and if the data is biased or just plain wrong, the predictive capacity will be too. So I think we need to be really cautious, particularly about the sales pitches being made by the companies selling these predictive tools to our policing and national security services. The certainty with which they're doing so, I think, needs to be questioned.
The second caution I would add is on the role that AI plays in the creation of misinformation. Don't get me wrong, there's been a ton of it in this conflict, but it hasn't really been the synthetic media or the deep fakes that many feared would be a big problem in events like this. Instead, the misinformation has been low tech. It's been photos and videos from other events taken out of context and presented as if they were from this one. It's been cheap fakes, not deep fakes. Now, there have even been cases where AI deepfake-detection tools, rolled out in response to the problem of deep fakes, have falsely flagged real images as being created by AI. In this case, the threat of deep fakes is causing more havoc than the deep fakes themselves.
Finally, though, I think there is a place where AI is causing real harm in this conflict, and that is on social media. Our Twitter and our Facebook and our TikTok feeds are being shaped by artificially intelligent algorithms. And more often than not, these algorithms reinforce our biases and fuel our collective anger. The world seen through content that only makes us angry is just fundamentally a distorted one. And more broadly, I think calls for reining in social media, whether by the companies themselves or through regulation, are being replaced with opaque and ill-defined notions of AI governance. And don't get me wrong, AI policy is important, but it is the social media ecosystem that is still causing real harm. We can't take our eye off of that policy ball.
I'm Taylor Owen, and thanks for watching.
- Is Israel ready for the nightmare waiting in Gaza?
- Lessons from Gaza: Think before you Tweet
- Be very scared of AI + social media in politics
- The AI power paradox: Rules for AI's power
- The OpenAI-Sam Altman drama: Why should you care?
- Gemini AI controversy highlights AI racial bias challenge
- Israel's Lavender: What could go wrong when AI is used in military operations?
Use AI and data to predict and prevent crises - Melinda Bohannon
Data-driven humanitarian efforts are revolutionizing crisis response, says Melinda Bohannon, a prominent expert in international development. Speaking at a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly, she highlighted the significance of using data for better targeting and for foreseeing global issues.
Bohannon notes, "In conflict and crises, we've used AI-driven models to track media and conflict events and human rights abuses and understand where conflicts are likely to break out. So we have that element of predictability in our policy and our program responses," underscoring the power of data to predict and preempt crises, enhancing humanitarian efforts significantly.
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
- Use new data to fight climate change & other challenges: UN tech envoy
- How AI can be used in public policy: Anne Witkowsky
- Scared of rogue AI? Keep humans in the loop, says Microsoft's Natasha Crampton
- AI plus existing technology: A recipe for tackling global crisis
- Can data and AI save lives and make the world safer?
AI plus existing technology: A recipe for tackling global crisis
When a country experiences a natural disaster, satellite technology and artificial intelligence can be used to rapidly gather data on the damage and initiate an effective response, according to Microsoft Vice Chair and President Brad Smith.
But to actually save lives, "it's high-tech meets low-tech," he said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
He gave the example of SEEDS, an Indian NGO that dispatches local teens to distribute life-saving aid during heatwaves. He said the program exemplifies the effective combination of “artificial intelligence, technology, and people on the ground.”
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
- The urgent global water crisis
- Armenia faces Karabakh refugee crisis
- Is the global food crisis here to stay?
- Can data and AI save lives and make the world safer?
- The AI power paradox: Rules for AI's power
- A vision for inclusive AI governance
- An early warning system from the UN to avert global disasters
- Use AI and data to predict and prevent crises - Melinda Bohannon