AI and war: Governments must widen safety dialogue to include military use
Hardly a week goes by without the announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the world's democratic governments. They are all wrestling with how to regulate AI, and, with little variation, they seem to settle on a focus on safety.
Last week, the US Department of Homeland Security joined this line of efforts with its own advisory body, composed largely of industry representatives, with some from academia and civil society, to look at the safety of AI in its own context. And what is remarkable amid all this focus on safety is how little emphasis, or even attention, is given to restricting or putting guardrails around the use of AI by militaries.
And that is remarkable, because we can already see the harms of overreliance on AI, even as industry pushes it as its latest opportunity. Just look at the venture capital pouring into defense tech, or "DefTech" as it is popularly called. So I think we should push to widen the lens when we talk about AI safety to include binding rules on military uses of AI. The harms are real. These are life-and-death situations. Just imagine someone being misidentified as a legitimate target for a drone strike, or consider the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield without many rules around them, because the fog of war also makes it possible for companies to jump into the void.
So it is important that AI safety at least includes a focus on, and discussion of, what constitutes proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world, and that democratic countries put rules in place to make sure the rules-based order, international law, human rights, and humanitarian law are upheld even with the latest technologies like AI.
AI policy formation must include voices from the Global South
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives when formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs (China, the US, and Europe) would overlook crucial opportunities to address risks and concerns unique to the Global South.
This is GZERO AI, from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the leading AI policy blocs, like the EU, the UK, the US, and Japan, which all have AI offices for oversight.
But what I really want to focus on is the role of people in the Global South, and how underrepresented they are in discussions about what AI means in their local contexts and about how, if at all, they participate in policy debates. Because right now, our focus is far too much on the three big policy blocs: China, the US, and Europe.
That is also, of course, because a lot of industry is right around the corner in Silicon Valley. But I learned so much from listening to people who focus on the African continent, where no fewer than 2,000 languages are spoken. There are many questions about what AI will mean for those languages, and about access for people beyond the exploitative and extractive model under which large language models are trained with cheap labor from people in developing countries, but also about how the harms can be so different.
For example, disinformation tends to spread via WhatsApp rather than on social media platforms, and voice generated with AI, synthetic voice, is one of the most effective ways to spread it. That is not as prominently recognized here, where there is so much focus on text content and deepfake videos but not so much on audio. And then, of course, we talked about elections, because a record number of people are voting this year, and disinformation tends to pick up around elections.
And AI is really a wild card in that. So my takeaway is that we need to have many more conversations, not so much about AI and tech policy in the Global South as with the people who live in those communities, who research the impact of AI in the Global South, or who push for fair treatment when their governments use the latest technologies for repression, for example.
So, lots of fruitful thought. And I was very grateful that people made it all the way over here to share their perspectives with us.
Singapore sets an example on AI governance
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, recorded at the Singapore Conference on Artificial Intelligence, she reviews the latest item on the Singapore government's AI policy agenda: how to govern AI.
Hello. My name is Marietje Schaake. I'm in Singapore this week, and this is GZERO AI. Again, a lot of AI activity is going on here, at a conference organized by the Singapore government that is looking at how to govern AI, the key question, the million-dollar, billion-dollar question, on the agendas of politicians, whether in cities, countries, or multilateral organizations. What I like about the approach of the government here in Singapore is that it has brought together a group of experts from multiple disciplines and multiple countries to help it tackle the questions of what we should be asking ourselves and how experts can inform what Singapore should do with regard to its AI policy. This listening mode, inviting experts first, is a great approach, and hopefully more governments will adopt it, because such well-informed thinking is necessary, especially while there is so much going on already. Singapore is thinking very clearly and strategically about what its unique role can be in a world full of AI activity.
Speaking of the world full of AI activity: the EU will hold the last, or at least the last planned, negotiating round on the EU AI Act, where the most difficult points will have to come to the table. There are outstanding differences between the Member States and the European Parliament around national security uses of AI and the extent to which human rights protections will be covered, but also a critical discussion, surfacing more and more, around foundation models: whether they should be regulated, how they should be regulated, and how that can be done in a way that does not disadvantage European companies compared to, for example, the US leaders in the generative AI space in particular. So it's a pretty intense political fight, even though it looked like there was political consensus until about a month ago. But of course that is not unusual; negotiations always have to tackle the most difficult points at the end, and that is where we are. So it's a space to watch, and I wouldn't be surprised if an additional negotiating round were scheduled after the one this week.
Then there will be the first physical meeting of the UN AI Advisory Body, of which I'm a member, and I'm looking forward to it. It will happen in New York City, and it will really be the first opportunity for all of us to get together and discuss, after online working sessions have taken place and a flurry of activity has taken off since we were appointed roughly a month ago. So the UN is moving at breakneck speed this time, and hopefully it will lead to important questions and answers with regard to the global governance of AI, the unique role of the United Nations, and the application of the UN Charter, international human rights, and international law at this critical moment for the global governance of artificial intelligence.
Is the EU's landmark AI bill doomed?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she talks about the potential pitfalls of the imminent EU AI Act and the sudden resistance that could jeopardize it altogether.
After a weekend full of drama around OpenAI, it is now time to shift to another potentially dramatic conclusion of an AI challenge: the EU AI Act, which is entering its final phase. This week, the Member States of the EU will decide on their position, and there is sudden resistance, from France and Germany in particular, to including foundation models in the EU AI Act. I think that is a mistake. It is crucial for a safe, but also competitive and democratically governed, AI ecosystem that foundation models be part of the EU AI Act, which would be the most comprehensive AI law the democratic world has put forward. So the world is watching, and it is important that EU leaders understand that time is of the essence, given the speed of development of artificial intelligence, and of generative AI in particular.
And actually, that speed of development is what is now catching up with the negotiators, because in the initial phase the European Commission had designed the law to be risk-based, looking at the outcomes of AI applications. So if AI is used to decide whether to hire someone or to give them access to education or social benefits, the consequences for the individual affected can be significant, and so, in proportion to the risk, mitigating measures should be in place. The law was designed to cover everything from very low-risk or no-risk applications to high-risk and unacceptable-risk applications, with a social credit scoring system, for example, deemed unacceptable. But then, when generative AI products started flooding the market, the European Parliament, which was formulating its position, decided: we need to look at the technology as well; we cannot just look at the outcomes. And I think that is critical, because foundation models are so fundamental. They form the basis of so much downstream use that if there are problems at that initial stage, they ripple, like an earthquake, through many, many applications. And if you don't want startups or downstream users to be confronted with liability or very high compliance costs, then it is important to start at the roots and make sure that the core ingredients of these AI models are properly governed and safe to use.
So, when I look ahead to December, when the European Commission, the European Parliament, and the Member States come together, I hope negotiators will look at how foundation models can be regulated: not a yes or no to regulation, but a progressive, tiered approach that attaches the strongest mitigating and scrutiny measures to the most powerful players. That is how it has been done in many other sectors, and it would be very appropriate for AI foundation models as well. There is a lot of debate going on. Open letters are being penned, experts are speaking out in op-eds, and I'm sure there is a lot of heated debate among the Member States of the European Union. I just hope the negotiators appreciate that the world is watching, many people with great hope as to how the EU can once again regulate on the basis of its core values, and that, with what we now know about how generative AI is built on these foundation models, it would be a mistake to overlook them in the most comprehensive EU AI law.