AI's existential risks: Why Yoshua Bengio is warning the world
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on the growing excitement around artificial intelligence. At a recent AI conference, Owen observes that while startups and government officials emphasized AI's economic potential, prominent AI researcher Yoshua Bengio voiced serious concerns about its existential risks. Bengio, who was instrumental in developing the technology, stresses the importance of cautious public policy, warning that current AI research tends to prioritize power over safety.
A couple of weeks ago, I was at this big AI conference in Montreal called All In. It was all a bit over the top. There were smoke machines, loud music, and food trucks. It's clear that AI has come a long way from the quiet labs it was developed in. I'm still skeptical of some of the hype around AI, but there's just no question we're in a moment of great enthusiasm. There were dozens of startup founders there talking about how AI was going to transform this industry or that, and government officials promising that AI was going to supercharge our economy.
And then there was Yoshua Bengio. Bengio is widely considered one of the world's most influential computer scientists. In 2018, he and two colleagues won the Turing Award, often called the Nobel Prize of computing, for their work on deep learning, which forms the foundation of many of today's AI models. In 2022, he was the most cited computer scientist in the world. It's really safe to say that AI, as we currently know it, might not exist without Yoshua Bengio.
And I recently got the chance to talk to Bengio for my podcast, "Machines Like Us." I wanted to find out what he thinks about AI now, about the current moment we're in, and I learned three really interesting things. First, Bengio has had an epiphany of sorts, as has been widely discussed in the media. He now believes that, left unchecked, AI has the potential to pose an existential threat to humanity. And so he's asking us: even if there's only a small chance of this, why not proceed with tremendous caution?
Second, he actually thinks that the divide over this existential risk, which seems to exist in the scientific community, is being overplayed. He and Meta's Yann LeCun, for example, with whom he shared the Turing Award, differ on the timeframe of this risk and on industry's ability to contain it. But Bengio argues they agree on the possibility of it. And in his mind, it's this possibility that should create clarity in our public policy. Without certainty over the risk, he thinks the precautionary principle should lead, particularly when the risk is so potentially grave.
Third, and really interestingly, he's concerned about the incentives being prioritized in this moment of AI commercialization. This extends from executives like LeCun potentially downplaying risk and overstating industry's ability to contain it, right down to the academic research labs where a majority of the work is currently focused on making AI more powerful, not safer. This is a real warning that I think we need to heed. There's just no doubt that Yoshua Bengio's research contributed greatly to the current moment of AI we're in, but I sure hope his work on risk and safety shapes the next. I'm Taylor Owen and thanks for watching.
Avoiding extinction: A Q&A with Gladstone AI’s Jeremie Harris
In November 2022, the US Department of State commissioned a comprehensive report on the risks of artificial intelligence. The government turned to Gladstone AI, a four-person firm founded the year before to write such reports and brief government officials on matters concerning AI safety.
Gladstone AI interviewed more than 200 people working in and around AI about what risks keep them up at night. Their report, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” was released to the public on March 11.
The short version? It’s pretty dire: “The recent explosion of progress in advanced artificial intelligence has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like and WMD-enabling catastrophic risks.” Next to the words “catastrophic risks” is a particularly worrying footnote: “By catastrophic risks, we mean risks of catastrophic events up to and including events that would lead to human extinction.”
With all that in mind, GZERO spoke to Jeremie Harris, co-founder and CEO of Gladstone AI, about how this report came to be and how we should rewire our thinking about the risks posed by AI.
This interview has been edited for clarity and length.
GZERO: What is Gladstone and how did the opportunity to write this report come about?
Jeremie Harris: After GPT-3 came out in 2020, we assessed that the key principle behind it might be extensible enough that we should expect a radical acceleration in AI capabilities. Our views were shaped by our technical expertise in AI (we'd founded a now-acquired AI company in 2016), and by our conversations with friends at the frontier labs, including OpenAI itself.
By then, it was already clear that a ChatGPT moment was coming, and that the US government needed to be brought up to speed. We briefed a wide range of stakeholders, from cabinet secretaries to working-level action officers, on the new AI landscape. A year before ChatGPT was released, we happened upon a team at the State Department that recognized the importance of AI scaling, the push toward larger, more powerful models. They decided to commission an assessment of that risk set a month before ChatGPT launched, and we were awarded the contract.
You interviewed 200 experts. How did you determine who to talk to and who to take most seriously?
Harris: We knew who the field's key contributors were, and had spoken to many of them personally.
Our approach was to identify and engage all of the key pockets of informed opinion on these issues, from leadership to AI risk skeptics, to concerned researchers. We spoke to members of the executive, policy, safety, and capabilities teams at top labs. In addition, we held on-site engagements with researchers at top academic institutions in the US and U.K., as well as with AI auditing companies and civil society groups.
We also knew that we needed to account for the unique perspective of the US government's national security community, which has a long history of dealing with emerging technologies and WMD-like risks. We held unprecedented workshops that brought together representatives and WMD experts from across the US interagency to discuss AI and its national security risks, and had them red-team our recommendations and analysis.
What do you want the average person to know about what you found?
Harris: AI has already helped us make amazing breakthroughs in fields like materials science and medicine. The technology’s promise is real. Unfortunately, the same capabilities that create that promise also create risks, and although we can't be certain, a significant and growing body of data does suggest that these risks could lead to WMD-scale effects if they're not properly managed. The question isn't how do we stop AI development, but rather, how can we implement common-sense safeguards that AI researchers themselves are often calling for, so that we can reap the immense benefits.
Our readership is (hopefully) more informed than the average person about AI. What should they take away from the report?
Harris: Top AI labs are currently locked in a race on the path to human-level AI, or AGI. This competitive dynamic erodes the margin they might otherwise invest in developing and implementing safety measures, at a time when we lack the technical means to ensure that AGI-level systems can be controlled or prevented from being weaponized. Compounding this challenge is the geopolitics of AI development, as other countries develop their own domestic AI programs.
This problem can be solved. The action plan lays out a way to stabilize the racing dynamics playing out at the frontier of the field; strengthen the US government's ability to detect and respond to AI incidents; and scale AI development safely domestically and internationally.
We suggest leveraging existing authorities, identifying requirements for new legal regimes when appropriate, and highlighting new technical options for AI governance that make domestic and international safeguards much easier to implement.
What is the most surprising—or alarming—thing you encountered in putting this report together?
Harris: From speaking to frontier researchers, it was clear that labs are under significant pressure to accelerate their work and build more powerful systems, and this increasingly involves hiring staff who are more interested in pushing capabilities forward than in addressing risks. But this has also created a significant opportunity: many frontier lab executives and staff want to take a more balanced approach. As a result, the government has a window to introduce common-sense safeguards that would be welcomed not only by the public, but by important elements within frontier labs themselves.
Have anything to make us feel good about where things are headed?
Harris: Absolutely. If we can solve for the risk side of the equation, AI offers enormous promise. And there really are solutions to these problems. They require bold action, but that's not unprecedented: we've had to deal with catastrophic national security risks before, from biotechnology to nuclear weapons.
AI is a different kind of challenge, but it also comes with technical levers that can make it easier to secure and assure. On-chip governance protocols offer new ways to verify adherence to international treaties, and fine-grained software-enabled safeguards can allow for highly targeted regulatory measures that place the smallest possible burden on industry.