Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about the potential pitfalls of the imminent EU AI Act and the sudden resistance that could jeopardize it altogether.
After a weekend full of drama around OpenAI, it's time to shift to another AI story heading toward a potentially dramatic conclusion: the EU AI Act, which is entering its final phase. This week, the Member States of the EU will decide on their position, and there is sudden resistance, from France and Germany in particular, to including foundation models in the EU AI Act. I think that is a mistake. It is crucial for a safe but also competitive and democratically governed AI ecosystem that foundation models are part of the EU AI Act, which would be the most comprehensive AI law the democratic world has put forward. So, the world is watching, and it is important that EU leaders understand that time is of the essence, given the speed of development of artificial intelligence and, in particular, generative AI.
And actually, that speed of development is now catching up with the negotiators, because in the initial phase, the European Commission had designed the law to be risk-based, looking at the outcomes of AI applications. So, if AI is used to decide whether to hire someone or to give them access to education or social benefits, the consequences for the individual can be significant, and so mitigating measures proportionate to the risk should be in place. The law was designed to cover everything from very low- or no-risk applications to high-risk and unacceptable-risk applications, with a social credit scoring system as an example of the unacceptable category. But then, when generative AI products started flooding the market, the European Parliament, which was taking its position at the time, decided, “We need to look at the technology as well. We cannot just look at the outcomes.” And I think that is critical, because foundation models are so fundamental. They form the basis of so much downstream use that if there are problems at that initial stage, they ripple through many, many applications, like an earthquake. And if you don't want startups or downstream users to be confronted with liability or very high compliance costs, then it's also important to start at the roots and make sure the core ingredients of these AI models are properly governed and safe to use.
So, when I look ahead to December, when the European Commission, the European Parliament, and the Member States come together, I hope negotiators will look at how foundation models can be regulated: not a yes or no to regulation, but a progressive, tiered approach that attaches the strongest mitigating and scrutiny measures to the most powerful players. That is the way it has been done in many other sectors, and it would be very appropriate for AI foundation models as well. There's a lot of debate going on. Open letters are being penned, op-eds are being published, experts are speaking out, and I'm sure there is a lot of heated debate among the Member States of the European Union. I just hope the negotiators appreciate that the world is watching, many people with great hope that the EU can once again regulate on the basis of its core values, and that with what we now know about how generative AI is built upon these foundation models, it would be a mistake to overlook them in the most comprehensive EU AI law.