AI's existential risks: Why Yoshua Bengio is warning the world
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on the growing excitement around artificial intelligence. At a recent AI conference, Owen observed that while startups and officials emphasized AI's economic potential, prominent AI researcher Yoshua Bengio voiced serious concerns about its existential risks. Bengio, who was central to the technology's development, stresses the importance of cautious public policy, warning that current AI research tends to prioritize power over safety.
A couple of weeks ago, I was at this big AI conference in Montreal called All In. It was all a bit over the top. There were smoke machines, loud music, and food trucks. It's clear that AI has come a long way from the quiet labs it was developed in. I'm still skeptical of some of the hype around AI, but there's just no question we're in a moment of great enthusiasm. There were dozens of startup founders there talking about how AI was going to transform this industry or that, and government officials promising that AI was going to supercharge our economy.
And then there was Yoshua Bengio. Bengio is widely considered one of the world's most influential computer scientists. In 2018, he and two colleagues won the Turing Award, often called the Nobel Prize of computing, for their work on deep learning, which forms the foundation of many of today's AI models. In 2022, he was the most cited computer scientist in the world. It's safe to say that AI as we currently know it might not exist without Yoshua Bengio.
And I recently got the chance to talk to Bengio for my podcast, "Machines Like Us." I wanted to find out what he thinks about AI now, about the current moment we're in, and I learned three really interesting things. First, Bengio has had an epiphany of sorts, as has been widely discussed in the media. He now believes that, left unchecked, AI has the potential to pose an existential threat to humanity. And so he's asking us: even if there's only a small chance of this, why not proceed with tremendous caution?
Second, he actually thinks that the divide over this existential risk, which seems to exist in the scientific community, is being overplayed. He and Meta's Yann LeCun, for example, with whom he shared the Turing Award, differ on the timeframe of this risk and on industry's ability to contain it. But Bengio argues they agree on the possibility of it. And in his mind, it's this possibility that should create clarity in our public policy. Without certainty over risk, he thinks the precautionary principle should lead, particularly when the risk is so potentially grave.
Third, and really interestingly, he's concerned about the incentives being prioritized in this moment of AI commercialization. This extends from executives like LeCun potentially downplaying risk and overstating industry's ability to contain it, right down to the academic research labs where a majority of the work is currently focused on making AI more powerful, not safer. This is a real warning that I think we need to heed. There's just no doubt that Yoshua Bengio's research contributed greatly to the current moment of AI we're in, but I sure hope his work on risk and safety shapes the next. I'm Taylor Owen and thanks for watching.
UN Secretary-General António Guterres on AI, Security Council reform, and global conflicts
UN Secretary-General António Guterres joins Ian Bremmer on the GZERO World Podcast for an exclusive conversation from the sidelines of the General Assembly at a critical moment for the world and the UN itself. Amid so many ongoing crises, is meaningful reform at the world’s largest multilateral institution possible? Between ongoing wars in Ukraine and Gaza, the climate crisis threatening the lives of millions, and a broken Security Council, there’s a lot to discuss. But there are some reasons for optimism. This year could bring the UN into a new era by addressing one of the biggest challenges facing our society: artificial intelligence and the growing digital divide. The UN will hold its first-ever Summit of the Future, where members will vote on a Global Digital Compact, agreeing to shared principles for AI and digital governance. In a wide-ranging conversation, Guterres lays out his vision for the future of the UN and why he believes now is the time to reform our institutions to meet today’s political and economic realities.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
- Can the UN get the world to agree on AI safety? ›
- An interview with UN Secretary-General António Guterres ›
- 2023 UN General Assembly's top objective, according to António Guterres ›
- Peace in Ukraine is world's priority, says UN chief António Guterres ›
- UN’s first global framework for AI governance - GZERO Media ›
- The challenges of peacekeeping amid rising global conflicts - GZERO Media ›
Breaking: The UN unveils plan for AI
Overnight, and after months of deliberation, a United Nations advisory body studying artificial intelligence released its final report. Aptly called “Governing AI for Humanity,” it is a set of findings and policy recommendations for the international organization, updating the group’s interim report from December 2023.
“As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place,” the report’s authors wrote. “The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.”
Before we dive in, a quick humblebrag and editorial disclosure: Ian Bremmer, founder and president of both Eurasia Group and GZERO Media, served as a rapporteur for the UN High-Level Advisory Body on Artificial Intelligence, the group in charge of the report.
The HLAB-AI report asks the UN to begin working on a “globally inclusive” system for AI governance, calls on governments and stakeholders to develop AI in a way that protects human rights, and makes seven recommendations. Let’s dive into each:
- An international scientific panel on AI: A new group of volunteer experts would issue an annual report on AI risks and opportunities. They’d also contribute regular research on how AI could help achieve the UN’s Sustainable Development Goals, or SDGs.
- Policy dialogue on AI governance: A twice-yearly policy dialogue with governments and stakeholders on best practices for AI governance. It’d have an emphasis on “international interoperability” of AI governance.
- AI standards exchange: This effort would develop common definitions and standards for evaluating AI systems. It’d also create a process for identifying gaps in these definitions and standards and working out how to fill them.
- Capacity development network: A network of new development centers that would provide researchers and social entrepreneurs with expertise, training data, and computing power. It’d also develop online educational resources for university students and a fellowship program for individuals to spend time in academic institutions and tech companies.
- Global fund for AI: A new fund that would collect donations from public and private groups and disburse money to “put a floor under the AI divide,” focused on countries with fewer resources to fund AI.
- Global AI data framework: An initiative to set common standards and best practices governing AI training data and its provenance. It’d hold a repository of data sets and models to help achieve the SDGs.
- AI office within the Secretariat: This new office would see through the proposals in this report and advise the Secretary-General on all matters relating to AI.
The authors conclude by remarking that if the UN is able to chart the right path forward, “we can look back in five years at an AI governance landscape that is inclusive and empowering for individuals, communities, and States everywhere.”
To learn more, Ian will host a UN panel conversation on Saturday, Sept. 21, which you can watch live here. And if you miss it, we’ll have a recap in our GZERO AI newsletter on Tuesday. You can also check out the full report here.
AI and war: Governments must widen safety dialogue to include military use
Hardly a week goes by without the announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the world's democratic governments. They're all wrestling with how to regulate AI, and they seem to settle, with little variation, on a focus on safety.
Last week we saw the Department of Homeland Security in the US join this line of efforts with its own advisory body: lots of industry representatives, along with some from academia and civil society, looking at the safety of AI in its own context. And what's remarkable amid all this focus on safety is how little emphasis, or even attention, is given to restricting or putting guardrails around the use of AI by militaries.
And that is remarkable because we can already see the harms of overreliance on AI, even as industry pushes this as its latest opportunity. Just look at the venture capital pouring into defense tech, or “DefTech” as it's popularly called. So I think we should push to widen the lens when we talk about AI safety to include binding rules on military uses of AI. The harms are real. These are life-and-death situations. Just imagine somebody being misidentified as a legitimate target for a drone strike, or consider the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield without many rules around them, because the fog of war also makes it possible for companies to jump into the void.
So it is important that AI safety at least includes a focus on, and discussion of, the proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world, and that rules are put in place, initiated by democratic countries, to make sure that the rules-based order, international law, human rights, and humanitarian law are upheld even with the latest technologies like AI.
- Russia-Ukraine war: How we got here ›
- Robots are coming to a battlefield near you ›
- AI explosion, elections, and wars: What to expect in 2024 ›
- Biden & Xi set to agree on regulating military use of AI ›
- Ukraine’s AI battlefield ›
- Will AI further divide us or help build meaningful connections? - GZERO Media ›
- How neurotech could enhance our brains using AI - GZERO Media ›
AI policy formation must include voices from the Global South
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives in formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs (China, the US, and Europe) would overlook crucial opportunities to address risks and concerns unique to the Global South.
This is GZERO AI, from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the leading AI policy blocs like the EU, the UK, the US, and Japan, all of which have AI offices for oversight.
But what I really want to focus on is the role of people in the Global South, and how they're underrepresented in discussions about both what AI means in their local context and how they participate in debates around policy, if they do at all. Because right now, our focus is way too much on the three big policy blocs: China, the US, and Europe.
That's also because, of course, a lot of industry is right around the corner in Silicon Valley. But I've learned so much from listening to people who focus on the African continent, where there are no fewer than 2,000 languages. There are many questions about what AI will mean for those languages and for people's access, beyond just the exploitative and extractive model under which large language models are trained with cheap labor from people in these developing countries, but also about how harms can be so different.
For example, disinformation there tends to spread via WhatsApp rather than social media platforms, and via voice generated with AI. Synthetic voice is one of the most effective ways to spread disinformation, something that's not as prominently recognized here, where there's so much focus on text content and deepfake videos, but not so much on audio. And then, of course, we talked about elections, because a record number of people are voting this year, and disinformation around elections tends to pick up.
And AI is really a wild card in that. So my takeaway is that we just need to have many more conversations, not so much about AI in the Global South and tech policy there, but with the people who are living in those communities, researching the impact of AI in the Global South, or pushing for fair treatment when their governments use the latest technologies for repression, for example.
So, lots of fruitful thought. I was very grateful that people made it all the way over here to share their perspectives with us.
Exclusive: How to govern the unknown – a Q&A with MEP Eva Maydell
The European Parliament passed the Artificial Intelligence Act on March 13, making the EU the first major jurisdiction to adopt comprehensive regulation of the emerging technology. The vote capped a five-year effort to manage AI and its potential to disrupt every industry and cause geopolitical tensions.
The AI Act, which takes effect later this year, places basic transparency requirements on generative AI models such as OpenAI’s GPT-4, mandating that their makers share some information about how the models are trained. There are more stringent rules for more powerful models and for those used in sensitive sectors, such as law enforcement or critical infrastructure. As with the EU’s data privacy law, there are steep penalties for companies that violate the new AI legislation: up to 7% of their annual global revenue.
GZERO spoke with Eva Maydell, a Bulgarian member of the European Parliament on the Committee on Industry, Research, and Energy, who negotiated the details of the AI Act. We asked her about the imprint Europe is leaving on global AI regulation.
GZERO: What drove you to spearhead work on AI in the European Parliament?
MEP Eva Maydell: It’s vital that we not only tackle the challenges and opportunities of today but those of tomorrow. That way, we can ensure that Europe is its most resilient and prepared. One of the most interesting and challenging aspects of being a politician who works on tech policy is trying to strike the right balance between enabling innovation and competitiveness and ensuring we have the right protections and safeguards in place. Artificial intelligence has the potential to change the world we live in, and having the opportunity to work on such an impactful piece of law was a privilege and a responsibility.
How do you think the AI Act balances regulation with innovation? Can Europe become a standard-setter for the AI industry while also encouraging development and progress within its borders?
Maydell: I fought very hard to ensure that innovation remained a strong feature of the AI Act. However, the proof of the pudding is in the eating. We must acknowledge that Europe has some catching up to do. AI take-up by European companies is 11%. Europeans rely on foreign countries for 80% of digital products and services. We also have to tackle inflation and stagnating growth. AI has the potential to be the engine for innovation, creativity, and prosperity, but only if we ensure that we keep working on all the other important pieces of the puzzle, such as a strong single market and greater access to capital.
AI is evolving rapidly. Does the AI Act set Europe up to be responsive to unforeseen advancements in the technology?
Maydell: One of the most difficult aspects of regulating technology is trying to regulate the unknown. That is why it’s essential to stick to principles rather than over-prescription wherever possible: for example, a risk-based approach and, where possible, alignment with international standards. This gives you the ability to adapt. It is also why the success of the AI Office and AI Forum will be so important. The guidance we offer businesses and organizations in the coming months on how to implement the AI Act will be key to its long-term success. Beyond the pages of the AI Act, we need to think about technological foresight. This is why I launched an initiative at the 60th annual Munich Security Conference, the “Council on the Future.” It aims to bridge the foresight and collaboration gap between the public and private sectors with a view toward enabling the thoughtful stewardship of technology.
Europe is the first mover on AI regulation. How would you like to see the rest of the world follow suit and pass their own laws? How can Europe be an example to other countries?
Maydell: I hope we’re an example to others in the sense that we have tried to take a responsible approach to the development of AI. We are already seeing nations around the world take important steps towards shaping their own governance structures for AI. We have the Executive Order in the US and the UK had the AI Safety Summit. It is vital that like-minded nations are working together to ensure that there is broader coherence around the values associated with the development and use of our technologies. Deeper collaboration through the G7, the UN, and the OECD is something we must continue to pursue.
Is there anything the AI Act doesn't do that you'd like to turn your attention to next?
Maydell: The AI Act is not a silver bullet, but it is an important piece of a much bigger puzzle. We have adopted an unprecedented amount of digital legislation in the last five years. With these strong regulatory foundations in place, my hope is that we now focus on perhaps the less newsworthy but equally important issue of good implementation. This means cutting red tape, reducing existing excess bureaucracy, and removing any frictions or barriers between different EU laws in the digital space. The more clarity and certainty we can offer companies, the more likely it is that Europe will attract inward investment and be the birthplace of some of the biggest names in global tech.
AI regulation means adapting old laws for new tech: Marietje Schaake
Why did Eurasia Group list "Ungoverned AI" as one of the top risks for 2024 in its annual report? Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, discussed the challenges around developing effective AI regulation, emphasizing that politicians and policymakers must recognize that not every challenge posed by AI and other emerging technologies will be novel; many merely require proactive approaches for resolution. She spoke during GZERO's Top Risks of 2024 livestream conversation, focused on Eurasia Group's report outlining the biggest global threats for the coming year.
"We didn't need AI to understand that discrimination is illegal. We didn't need AI to know that antitrust rules matter in a fair economy. We didn't need AI to know that governments have a key responsibility to safeguard national security," Schaake argues. "And so, those responsibilities have not changed. It's just that the way in which these poor democratic principles are at stake has changed."
For more:
- Watch the full livestream discussion, moderated by GZERO's publisher Evan Solomon and featuring the authors of the report, Eurasia Group & GZERO President Ian Bremmer and Eurasia Group Chairman Cliff Kupchan.
- Read the full report on The Top Risks of 2024.
- And don't miss Marietje Schaake's updates as co-host of our video series GZERO AI.
- A world of conflict: The top risks of 2024 ›
- UK AI Safety Summit brings government leaders and AI experts together ›
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake ›
- Singapore sets an example on AI governance ›
- AI and Canada's proposed Online Harms Act - GZERO Media ›
- Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity - GZERO Media ›
- Should we regulate generative AI with open or closed models? - GZERO Media ›
- Will AI further divide us or help build meaningful connections? - GZERO Media ›
- How is AI shaping culture in the art world? - GZERO Media ›
- AI's evolving role in society - GZERO Media ›
- UN’s first global framework for AI governance - GZERO Media ›
Grown-up AI conversations are finally happening, says expert Azeem Azhar
“The thing that’s surprised me most is how well CEOs are [now] articulating generative AI, this technology that’s only been public for a year or so,” Azhar says. “I’ve never experienced that in my life and didn’t realize how quickly they’ve moved.”
Azhar and Bremmer also discuss the underlying technology that’s allowed generative AI tools like ChatGPT to advance so quickly, and where conversations about applications of artificial intelligence go from here. Whereas a year ago experts were focused on the macro implications of existential risk, this year Azhar is excited to hear people focus on practical things like copyright and regulation: the small yet impactful things that move the economy and change how we live our lives.