Avoiding extinction: A Q&A with Gladstone AI’s Jeremie Harris

In November 2022, the US Department of State commissioned a comprehensive report on the risks of artificial intelligence. The government turned to Gladstone AI, a four-person firm founded the year before to write such reports and brief government officials on matters concerning AI safety.

Gladstone AI interviewed more than 200 people working in and around AI about what risks keep them up at night. Their report, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” was released to the public on March 11.

The short version? It’s pretty dire: “The recent explosion of progress in advanced artificial intelligence has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like and WMD-enabling catastrophic risks.” Next to the words “catastrophic risks” is a particularly worrying footnote: “By catastrophic risks, we mean risks of catastrophic events up to and including events that would lead to human extinction.”

With all that in mind, GZERO spoke to Jeremie Harris, co-founder and CEO of Gladstone AI, about how this report came to be and how we should rewire our thinking about the risks posed by AI.

This interview has been edited for clarity and length.

GZERO: What is Gladstone and how did the opportunity to write this report come about?

Jeremie Harris: After GPT-3 came out in 2020, we assessed that the key principle behind it might be extensible enough that we should expect a radical acceleration in AI capabilities. Our views were shaped by our technical expertise in AI (we'd founded a now-acquired AI company in 2016), and by our conversations with friends at the frontier labs, including OpenAI itself.

By then, it was already clear that a ChatGPT moment was coming and that the US government needed to be brought up to speed. We briefed a wide range of stakeholders, from cabinet secretaries to working-level action officers, on the new AI landscape. A year before ChatGPT was released, we happened upon a team at the State Department that recognized the significance of AI scaling toward larger, more powerful models. They decided to commission an assessment of that risk set a month before ChatGPT launched, and we were awarded the contract.

You interviewed 200 experts. How did you determine who to talk to and who to take most seriously?

Harris: We knew who the field's key contributors were, and had spoken to many of them personally.

Our approach was to identify and engage all of the key pockets of informed opinion on these issues, from lab leadership to AI-risk skeptics to concerned researchers. We spoke to members of the executive, policy, safety, and capabilities teams at top labs. In addition, we held on-site engagements with researchers at top academic institutions in the US and UK, as well as with AI auditing companies and civil society groups.

We also knew that we needed to account for the unique perspective of the US government's national security community, which has a long history of dealing with emerging technologies and WMD-like risks. We held unprecedented workshops that brought together representatives and WMD experts from across the US interagency to discuss AI and its national security risks, and had them red-team our analysis and recommendations.

What do you want the average person to know about what you found?

Harris: AI has already helped us make amazing breakthroughs in fields like materials science and medicine. The technology’s promise is real. Unfortunately, the same capabilities that create that promise also create risks, and although we can't be certain, a significant and growing body of evidence suggests that these risks could lead to WMD-scale effects if they're not properly managed. The question isn't how to stop AI development, but how to implement the common-sense safeguards that AI researchers themselves are often calling for, so that we can reap the technology's immense benefits.

Our readership is (hopefully) more informed than the average person about AI. What should they take away from the report?

Harris: Top AI labs are currently locked in a race toward human-level AI, or AGI. This competitive dynamic erodes the margin they might otherwise invest in developing and implementing safety measures, at a time when we lack the technical means to ensure that AGI-level systems can be controlled or prevented from being weaponized. Compounding this challenge is the geopolitics of AI development, as other countries develop their own domestic AI programs.

This problem can be solved. The action plan lays out a way to stabilize the racing dynamics playing out at the frontier of the field; strengthen the US government's ability to detect and respond to AI incidents; and scale AI development safely domestically and internationally.

We suggest leveraging existing authorities, identifying requirements for new legal regimes when appropriate, and highlighting new technical options for AI governance that make domestic and international safeguards much easier to implement.

What is the most surprising—or alarming—thing you encountered in putting this report together?

Harris: From speaking to frontier researchers, it was clear that labs are under significant pressure to accelerate their work and build more powerful systems, and this increasingly involves hiring staff who are more interested in pushing capabilities forward than in addressing risks. But that pressure has also created a significant opportunity: many frontier lab executives and staff want to take a more balanced approach. As a result, the government has a window to introduce common-sense safeguards that would be welcomed not only by the public, but by important elements within the frontier labs themselves.

Do you have anything to make us feel good about where things are headed?

Harris: Absolutely. If we can solve for the risk side of the equation, AI offers enormous promise. And there really are solutions to these problems. They require bold action, but that's not unprecedented: we've had to deal with catastrophic national security risks before, from biotechnology to nuclear weapons.

AI is a different kind of challenge, but it also comes with technical levers that can make it easier to secure and assure. On-chip governance protocols offer new ways to verify adherence to international treaties, and fine-grained software-enabled safeguards can allow for highly targeted regulatory measures that place the smallest possible burden on industry.
