AI and war: Governments must widen safety dialogue to include military use

Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, Marietje insists that governments must prioritize establishing guardrails for the deployment of artificial intelligence in military operations. Efforts to ensure that AI is safe to use are already underway, but, she argues, that discussion urgently needs to widen to include AI's use in warfare, an area where lives are at stake.

Not a week goes by without the announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the democratic governments of this world. They're all wrestling with how to regulate AI, and they seem, without much variation, to settle on a focus on safety.

Last week, the US Department of Homeland Security joined this line of efforts with its own advisory body, drawing lots of industry representatives and some voices from academia and civil society to look at the safety of AI in its own context. What's remarkable amid all this focus on safety is how little emphasis, or even attention, there is on restricting or putting guardrails around the use of AI by militaries.

And that is remarkable because we can already see the harms of overreliance on AI, even as industry pushes it as its latest opportunity; just look at the venture capital pouring into defense tech, or “DefTech” as it's popularly called. So I think we should push to widen the lens of AI safety to include binding rules on military uses of AI. The harms are real, and these are life-and-death situations. Just imagine somebody being misidentified as a legitimate target for a drone strike, or consider the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield without many rules around them, because the fog of war makes it possible for companies to jump into the void.

So it is important that the AI safety conversation at least includes a focus on, and discussion of, what constitutes proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world, and that democratic countries put rules in place to make sure the rules-based order, international law, human rights law, and humanitarian law are upheld even in the context of the latest technologies like AI.
