Exclusive: How to govern the unknown – a Q&A with MEP Eva Maydell
The European Parliament passed the Artificial Intelligence Act on March 13, making the EU the world’s first major government to adopt comprehensive regulation of the emerging technology. The vote capped a five-year effort to manage AI and its potential to disrupt every industry and stoke geopolitical tensions.
The AI Act, which takes effect later this year, places basic transparency requirements on generative AI models such as OpenAI’s GPT-4, mandating that their makers share some information about how the models are trained. More stringent rules apply to more powerful models and to those used in sensitive sectors such as law enforcement or critical infrastructure. As with the EU’s data privacy law, there are steep penalties for companies that violate the new legislation: up to 7% of their annual global revenue.
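To put that penalty ceiling in concrete terms, here is a minimal sketch of the arithmetic, assuming a flat cap of 7% of annual global revenue; the function name and the revenue figure are hypothetical illustrations, and the Act itself ties fine levels to the type of violation.

```python
# Illustrative sketch only: the AI Act caps certain fines as a share of
# annual global revenue (up to 7%, per the figure cited above). The
# function name and example revenue are hypothetical.
def max_ai_act_fine(annual_global_revenue_eur: float, cap: float = 0.07) -> float:
    """Upper bound on a revenue-based penalty under a 7% cap."""
    return annual_global_revenue_eur * cap

# A hypothetical company with EUR 10 billion in global revenue could face
# a fine of up to EUR 700 million.
print(f"EUR {max_ai_act_fine(10_000_000_000):,.0f}")  # EUR 700,000,000
```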
GZERO spoke with Eva Maydell, a Bulgarian member of the European Parliament on the Committee on Industry, Research, and Energy, who negotiated the details of the AI Act. We asked her about the imprint Europe is leaving on global AI regulation.
GZERO: What drove you to spearhead work on AI in the European Parliament?
MEP Eva Maydell: It’s vital that we tackle not only the challenges and opportunities of today but also those of tomorrow. That way, we can ensure that Europe is at its most resilient and prepared. One of the most interesting and challenging aspects of being a politician who works on tech policy is trying to strike the right balance between enabling innovation and competitiveness and ensuring we have the right protections and safeguards in place. Artificial intelligence has the potential to change the world we live in, and having the opportunity to work on such an impactful piece of law was both a privilege and a responsibility.
How do you think the AI Act balances regulation with innovation? Can Europe become a standard-setter for the AI industry while also encouraging development and progress within its borders?
Maydell: I fought very hard to ensure that innovation remained a strong feature of the AI Act. However, the proof of the pudding is in the eating. We must acknowledge that Europe has some catching up to do: AI take-up by European companies stands at just 11%, and Europeans rely on foreign countries for 80% of their digital products and services. We also have to tackle inflation and stagnating growth. AI has the potential to be the engine of innovation, creativity, and prosperity, but only if we keep working on all the other important pieces of the puzzle, such as a strong single market and greater access to capital.
AI is evolving at a rapid pace. Does the AI Act set Europe up to respond to unforeseen advances in the technology?
Maydell: One of the most difficult aspects of regulating technology is trying to regulate the unknown. This is why it’s essential to stick to principles rather than over-prescription wherever possible: for example, taking a risk-based approach and, where possible, aligning with international standards. That gives you the ability to adapt. It is also why the success of the AI Office and the AI Forum will be so important. The guidance we offer businesses and organizations in the coming months on how to implement the AI Act will be key to its long-term success. Beyond the pages of the AI Act, we need to think about technological foresight. This is why I launched an initiative at the 60th annual Munich Security Conference, the “Council on the Future.” It aims to bridge the foresight and collaboration gap between the public and private sectors with a view toward enabling the thoughtful stewardship of technology.
Europe is the first mover on AI regulation. How would you like to see the rest of the world follow suit and pass their own laws? How can Europe be an example to other countries?
Maydell: I hope we’re an example to others in the sense that we have tried to take a responsible approach to the development of AI. We are already seeing nations around the world take important steps toward shaping their own governance structures for AI: the US has its executive order, and the UK hosted the AI Safety Summit. It is vital that like-minded nations work together to ensure broader coherence around the values associated with the development and use of our technologies. Deeper collaboration through the G7, the UN, and the OECD is something we must continue to pursue.
Is there anything the AI Act doesn't do that you'd like to turn your attention to next?
Maydell: The AI Act is not a silver bullet, but it is an important piece of a much bigger puzzle. We have adopted an unprecedented amount of digital legislation in the last five years. With these strong regulatory foundations in place, my hope is that we now focus on the less newsworthy but equally important issue of good implementation. That means cutting red tape and excess bureaucracy and removing frictions or barriers between different EU laws in the digital space. The more clarity and certainty we can offer companies, the more likely it is that Europe will attract inward investment and be the birthplace of some of the biggest names in global tech.
AI in 2024: Will democracy be disrupted?
Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she shares her reflections on AI in 2023.
Hello, this is GZERO AI. My name is Marietje Schaake. It's the end of the year, and so it's the time for lists. As we see so many top fives, top threes, top tens of the key developments in AI, I thought I would just share a couple of reflections. Not list them, just look back on this year, which was remarkable in so many ways.
We saw a huge explosion of discussion around AI governance. Are companies the ones that should take on all this responsibility of assessing risk or deciding when to push new research onto the market? Or, as illustrated by the dramatic saga at OpenAI, are companies not in a good position to make all these decisions themselves and to design checks and balances in-house? Governments seem to agree: I don’t think they want to leave these decisions to the big companies, and so they are really stepping up across the board and across the globe. Only recently, in the last days of this year, we saw the political agreement around the EU AI Act, a landmark law that will really set a standard in the democratic world for governing AI in a binding fashion. But there were also a lot of voluntary codes of conduct, as we saw at the G7, statements that came out of the AI Safety Summit like the Bletchley Park Declaration, and the White House’s executive order, all adding to the many initiatives taken in an attempt to make sure that AI developments at least respect the laws that are on the books, if not make new ones where needed.
Now, what I thought was missing quite a bit, looking at the AI Safety Summit, for example, but also in discussions in my home country, the Netherlands, where there were elections in which AI did not feature at all in the political debate, is a better discussion, more informed and more anticipatory, about job displacement. I think it is potentially the most devastating and most disruptive development, and yet we don’t really hear much about it, short of reports by consulting firms that predict macroeconomic benefits over the long run. But if you look at the political fallout of job displacement and the need for resources, for example, to reskill and retrain people, there is a need for a much more public debate, and maybe we even need to start talking about the T-word, namely taxing AI companies.
What I also think is missing still, despite having had more reference to the Global South, is true engagement of people from all over the world, not just from the most advanced economies, but really, to have a global engagement with people to understand their lived experiences and needs with regard to the rollout of AI. Because even if people do not have agency over what AI decides about them, there will still be impact even if people are not even online yet. So I think it is incredibly important to have a more global, inclusive, and equal discussion with people from all over the world, and that will be something I'll be looking out for the year 2024.
And then last, and certainly not least, 2024 has been called the Year of Democracy. I hope we will say the same when we look back a year from now. An unprecedented number of people will be going to the polls, and there are still a lot of question marks about how disruptive AI is going to be for public and political debate, with new means of manipulation and of spreading disinformation through synthetic media that is really hard to distinguish from authentic human expression. The combination of AI and elections, AI and democracy, deserves a lot more attention, and it will probably draw that attention in 2024, the year when billions of people take to the polls.
For now, let me wish you a happy holiday season with friends and few screens, I hope. And we will see each other again afresh in the new year. Happy New Year and happy holidays.
The world of AI in 2024
2. Labor tensions: The acceleration of AI will continue to reshape industries, automating jobs and displacing workers. That will lead to widespread tension in various sectors of the economy. Union leaders could make AI the centerpiece of their strikes, and you might hear a lot of talk about “reskilling” workers on the lips of lawmakers heading into the 2024 election. This time it’s sure to work …
3. Copyright clarity: We don’t really know how AI models are trained, but we know they’re at least partially trained on unlicensed copyrighted material. Clarity is coming in Europe: The forthcoming AI Act mandates some transparency about training data. But in the US, where regulation is sparse, the courts are considering a big legal question about whether using copyrighted material as training data violates the law. At issue is whether the output is “transformative enough.” The answer to this legal question has extremely high stakes. Look for authors and artists to keep suing. But also look for companies, under pressure from lawmakers, to start opening up about how their systems are trained, whether copyrighted material is used, and why they think the stuff their models spit out does not constitute copyright infringement. We at GZERO aren’t holding our breath for writers' royalties (but we’d sure take ’em).
4. A big new law in Europe: The European Union’s AI Act is set to become law in the spring of 2024. Of course, lawmakers could falter before hitting the finish line, but an agreement this month made that unlikely. What’s ahead: The EU just held the first of 11 sessions to hammer out the details of the law, which will lead to a “four-column document” by February, reconciling proposals from the three EU legislative bodies. Only after that will country representatives vote to finalize the act. But this landmark law won’t have teeth in 2024 even if everything goes to plan because there’s a 12-month grace period for companies to comply. It’s all hurry up and wait.
5. The hype cycle continues: Major investment in AI in 2023 won’t prove to be a flash in the pan. With hints of lower interest rates and still-palpable interest in AI from tech investors hungry for massive returns, expect billion-dollar valuations, IPOs, mergers and acquisitions, and big-moneyed investment by top tech firms in startups all to accelerate.
6. Congress does something: The US Congress does more bickering than lawmaking these days, but there’s real political will not to get left behind on AI regulation. Lawmakers have been regularly discussing AI, grilling its corporate leaders, and brainstorming ideas for governance. They’ve proposed removing red tape for chipmakers, mandating disclosures for AI-generated political ads, and even considered a “light-touch” law making AI developers self-certify for safety. The US probably won’t pass anything as sprawling as the EU’s AI Act, but Congress will likely pass something about AI in the coming year. More than 50 AI-related bills have been introduced since the 118th Congress began last year, but none has passed either house.
7. Antitrust comes for AI: Regulators are circling. The US government sued Google for allegedly abusing its monopolies in search and advertising technology, Amazon for hurting competition on its e-commerce platform, and Meta for buying dominant market power through its Instagram and WhatsApp acquisitions. That’s the hallmark of current FTC Chair Lina Khan and Justice Department antitrust chief Jonathan Kanter, who have been set on enforcing antitrust law against Big Tech. And that fervor is likely to hit AI in 2024. There’s lots of political will to use antitrust law in the UK and Europe, which means scrutiny will soon come to AI. In fact, it’s already here. The FTC and the UK’s Competition and Markets Authority are reportedly probing Microsoft’s investment into OpenAI – it’s not a full-fledged investigation yet, but in 2024 antitrust regulators will be watching AI very closely.
8. Election problems: In 2024, an unprecedented number of countries – some 40-plus – will head to the polls, and many will have their eyes on places like the United States and India for the use of AI in disinformation campaigns ahead of Election Day. There is concern about deepfake technology fueling confusion or contributing to an already-challenging misinformation problem. We’ve already seen deepfake songs impersonating Indian Prime Minister Narendra Modi and videos portraying US President Joe Biden. But what we haven’t seen yet is AI disrupting an election. Will 2024 be the year that AI-generated words, videos, images, and music play a surprising role in elections?
9. New companies you’ve never heard of: By the end of 2024, the top companies in AI may be the same as today: Anthropic, Google, Meta, Microsoft, and OpenAI. But chances are there will be a startup you’ve never heard of on the list. Why? Not only is innovation an everyday reality in AI, but investors are eager to fund these projects to reap potential rewards. In the first half of 2023, AI’s share of total startup funding in the US more than doubled compared to the same period in 2022, from 11% to 26%. That includes household names such as OpenAI ($29 billion) and Anthropic ($5 billion), which had big funding rounds this year. But there are also 15 new AI “unicorns” (billion-dollar companies) that could break into the mainstream, including the enterprise AI firm Cohere ($2.2 billion) and the research lab Imbue ($1 billion). Even in a high-interest-rate environment, AI startups have fetched big valuations despite still-paltry revenue estimates, at a time when “easy money” has vanished from the broader tech sector. Expecting stasis would be foolish.
10. The real reason Sam Altman was fired: Expect to learn why OpenAI really fired Sam Altman in 2024. It’s perhaps the great mystery in AI, but it can’t remain a secret forever. If anyone knows the answer, please let us know.
EU lawmakers make AI history
It took two years — long enough to earn a Master's degree — but Europe’s landmark AI Act is finally nearing completion. Debates raged last week, but EU lawmakers on Friday reached a provisional agreement on the scope of Europe’s effort to rein in artificial intelligence.
The new rules will follow a two-tiered approach. They will require transparency from general-purpose AI models and impose more stringent safety measures on riskier ones. Generative AI models like OpenAI’s GPT-4 would fall into the former camp and be required to disclose basic information about how the models are trained. But folks in Brussels have also seen "The Terminator," so models deemed a higher risk will have to submit to regular safety tests, disclose any risks, take stringent cybersecurity precautions, and report their energy consumption.
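As a rough mental model of that two-tiered structure, here is a short sketch in Python; the tier names and obligation lists below are simplified paraphrases of the rules described above, not the regulation’s actual legal categories or text.

```python
# Simplified sketch of the AI Act's tiered obligations as described in
# this article. Tier names and obligation lists are illustrative
# paraphrases, not the regulation's legal terminology.
from enum import Enum

class Tier(Enum):
    GENERAL_PURPOSE = "general-purpose"  # e.g., generative models like GPT-4
    HIGHER_RISK = "higher-risk"          # more powerful or sensitive-sector models

OBLIGATIONS = {
    Tier.GENERAL_PURPOSE: [
        "disclose basic information about how the model is trained",
    ],
    Tier.HIGHER_RISK: [
        "disclose basic information about how the model is trained",
        "submit to regular safety tests",
        "disclose any known risks",
        "take stringent cybersecurity precautions",
        "report energy consumption",
    ],
}

def obligations_for(tier: Tier) -> list[str]:
    """Return the illustrative obligations attached to a tier."""
    return OBLIGATIONS[tier]

print(obligations_for(Tier.HIGHER_RISK))
```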
Thierry Breton, the EU’s industrial affairs chief, said Europe had just set itself up as “a pioneer” and “global standard-setter,” noting that the act will be a launchpad for EU startups and researchers and will grant the bloc a “first-mover advantage” in shaping global AI policy.
Mia Hoffmann, a research fellow at Georgetown University’s Center for Security and Emerging Technology, believes the AI Act will “become something of a global regulatory benchmark” similar to GDPR.
Recent sticking points have been over the regulation of large language models, but EU member governments plan to finalize the language in the coming months. Hoffmann says that while she expects it to be adopted soon, “with the speed of innovation, the AI Act's formal adoption in the spring of 2024 can seem ages away.”