Is AI responsible for a teen’s suicide?
Moments before Sewell Setzer III took his own life in February 2024, he was messaging with an AI chatbot. Setzer, a 14-year-old boy from Florida, had struck up an intimate and troubling relationship — if you can call it that — with an artificial intelligence application styled to simulate the personality of “Game of Thrones” character Daenerys Targaryen.
Setzer gave numerous indications to the chatbot, developed by a company called Character.AI, that he was actively suicidal. At no point did the chatbot break character, provide mental health support hotlines, or do anything to prevent the teen from harming himself, according to a wrongful death lawsuit filed by Setzer’s family last week. The company has since said that it has added protections to its app in the past six months, including a pop-up notification with the suicide hotline. But that’s a feature that’s been standard across search engines and social media platforms for years.
The lawsuit, filed in federal court in Orlando, also names Google as a defendant. The Big Tech company hired Character.AI’s leadership team and paid to license its technology in August, the latest in a spate of so-called acqui-hires in the AI industry. The lawsuit alleges that Google is a “co-creator” of Character.AI since its founders initially developed the technology while working there years earlier.
It’s unclear what legal liability Character.AI will have. Section 230 of the Communications Decency Act, which largely protects internet companies from civil suits, is untested when it comes to AI chatbots because it protects companies from speech posted by third parties. In the case of AI chatbots, the speech is directly from an AI company, so many experts have predicted that it won’t apply in cases like this.
National safety institutes — assemble!
The Biden administration announced that it will host a global safety summit on artificial intelligence on Nov. 20-21 in San Francisco. The International Network of AI Safety Institutes, which was formed at the AI Safety Summit in Seoul in May, will bring together safety experts from each member country’s AI safety institute. The current member countries are Australia, Canada, the European Union, France, Japan, Kenya, Singapore, South Korea, the United Kingdom, and the United States.
The aim? “Strengthening international collaboration on AI safety is critical to harnessing AI technology to solve the world’s greatest challenges,” Secretary of State Antony Blinken said in a statement.
Commerce Secretary Gina Raimondo, co-hosting the event with Blinken, said that the US is committed to “pulling every lever” on AI regulation. “That includes close, thoughtful coordination with our allies and like-minded partners.”

American and Chinese companies set new standards
It’s not every day that companies from the United States and China work together. But on Sept. 6, a new coalition of big tech companies representing both global powers announced that they have joined forces to develop new security standards for large language models.
The companies include China’s Ant Group, Tencent, and Baidu along with US firms Microsoft, Google, and Meta. The effort is part of the World Digital Technology Academy, a Geneva-based group established in 2023 under a United Nations framework. The efforts aim to reduce risks throughout the AI supply chain, such as protecting against data leaks and model tampering.
The partnership is a rare instance of cooperation between American and Chinese companies at a time when their respective governments are battling for AI dominance and control while systematically blocking one another’s companies from accessing key technologies. While it’s unlikely that this partnership will ease tensions between the American and Chinese governments, perhaps it’ll help forge a path for future collaboration between their industries.
How the Department of Homeland Security’s WMD office sees the AI threat
The US Department of Homeland Security is preparing for the worst possible outcomes from the rapid progression of artificial intelligence technology. What if powerful AI models are used to help foreign adversaries or terror groups build chemical, biological, radiological, or nuclear weapons?
The department’s Countering Weapons of Mass Destruction office, led by Assistant Secretary Mary Ellen Callahan, issued a report to President Joe Biden that was released to the public in June, with recommendations about how to rein in the worst threats from AI. Among other things, the report recommends building consensus across agencies, developing safe harbor measures to incentivize reporting vulnerabilities to the government without fear of prosecution, and developing new guidelines for handling sensitive scientific data.
We spoke to Callahan about the report, how concerned she actually is, and how her office is using AI to further its own goals while trying to outline the risks of the technology.
This interview has been edited for clarity and length.
GZERO: We profile a lot of AI tools – some benign, some very scary from a privacy or disinformation perspective. But when it comes to chemical, biological, radiological, and nuclear weapons, what do you see as the main threats?
Mary Ellen Callahan: AI is going to lower barriers to entry for all actors, including malign actors. The crux of this report is to look for ways to increase the promise of artificial intelligence, particularly with chemical and biological innovation, while limiting the perils, finding that kind of right balance between the containment of risk and fostering innovation.
We’re talking in one breath about chemical, biological, radiological, and nuclear threats — they’re all very different. Is there one that you’re most concerned about or see as most urgent?
I don’t want to give away too many secrets in terms of where the threats are. Although the task from the president was chemical, biological, radiological, nuclear threats, we focus primarily on chemical and biological threats for two reasons: One, chemical and biological innovation that is fostered by artificial intelligence is further along, and two, chemical and biological formulas and opportunities have already been included in some AI models.
And also because relatedly, the Department of Energy, which has a specialization in radiological and nuclear threats, is doing a separate classified report.
So, that’s less about the severity of the problem and more about what we’ll face soonest, right?
Well, anything that’s a WMD threat is low probability, but high impact. So we’re concerned about these at all times, but in terms of the AI implementation, the chemical and biological are more mature, I’d say.
How has the rise of AI changed the focus of your job? And is there anything about AI that keeps you up at night?
I would actually say that I am more sanguine now, having done a deeper dive into AI. One, we’re early in the stages of artificial intelligence development, and so we can catch this wave earlier. Two, there is a lot of interest and encouragement with regard to the model developers working with us proactively. There are chokepoints: The physical creation of these threats remains hard. How do you take it from ideation to execution? And there are a lot of steps between now and then.
And so what we’re trying to build into this guidance for AI model developers and others is pathway defeat — to try to develop off-ramps where we can defeat the adversaries, maybe early in their stage, maybe early as they are dealing with the ideation, [so they’re] not even able to get a new formula, or maybe at different stages of the development of a threat.
How are you thinking about the threat of open-source AI models that are published online for anyone to access?
We talked a little bit about open-source, but that wasn’t the focus of the report. I think the more important thing to focus on is the sources of the ingestion of the data – as I mentioned, there is already public source data related to biology and to chemistry. And so whether or not it is an open-source model, it’s the content of the models that I’m more focused on.
How do you feel about the pace of regulation in this country versus the pace of innovation?
We’re not looking at regulations to be a panacea here. What we’re trying to do right now is to make sure that everyone understands they have a stake in making artificial intelligence as safe as possible, and really to develop a culture of responsibility throughout this whole process — using a bunch of different levers. One lever is the voluntary commitments.
Another lever is the current laws. The current US regime between export controls, privacy, technology transfer, intellectual property, all of those can be levers and can be used in different ways. Obviously, we need to work with our international allies and make sure that we are working together on this. I don’t want to reveal too much, but there is interest that there can be some allied response in terms of establishing best practices.
Secretary Alejandro Mayorkas has noted that regulation can be backward-looking and reactive and might not keep up with the pace of technology. So, therefore, we’re not suggesting or asking for any new authorities or regulations in the first instance. But if we identify gaps, we may revisit whether new authorities or laws are needed.
In terms of legislation, do you think you have what you need to do your job? Or are there clear gaps in what’s on the books?
I actually think that the diverse nature of our laws is a benefit, and we can really leverage and make a lot of progress with even what we have on the books now — export controls, technology transfers, intellectual property, criminal behavior, and obviously if we have CFATS on the books, that would be great — the Chemical Facility Anti-Terrorism Standards from my friends at CISA. But we do have a lot of robust levers that we can use now. And even those voluntary commitments with the model developers saying they want to do it — if they don’t comply with that, there could even be civil penalties related to that.
Can you tell me about the safe harbor measure that your report recommends and how you want that to work?
There are two aspects to the safe harbor. One is an “if you see something, say something” element: people in labs, people who are selling products, people who notice something and think, “that doesn’t ring true.” That standard helps build a culture of responsibility.
And if somebody does report, then there could be a safe harbor reporting element — whether they’ve done something inadvertently to create a new novel threat, or they’ve noticed something in the pipeline. The safe harbor for abstaining from civil or criminal prosecution — that may need regulation.
Are you using AI at all in your office?
Yep. Actually, we are using AI on a couple of different detection platforms. The Countering Weapons of Mass Destruction Office has the subject matter expertise for CBRN threats here in the department and we provide training, technology, equipment, and detection capability. So we’ve been using algorithms and AI to help refine our algorithms with regard to identifying radiological, nuclear, chemical, and biological threats. And we’re going to continue to use that. We also are using AI as part of our biosurveillance program as well, both in trying to identify if there is a biodetection threat out there, but also if there is information that would indicate a biological threat out there in the country, and we’re trying to use AI to look for that in content.
Let’s end on an optimistic note. Is there anything else that gives you hope about AI factoring into your work?
The promise of AI is extraordinary. It really is going to be a watershed moment for us, and I'm really excited about this. I think thinking about the safety and security of the chemical and biological threats at this moment is exactly the right time. We’ve got to get in there early enough to establish these standards, these protocols, to share guidance, to fold in risk assessments into these calculations for the model developers, but also for the public at large. So I’m fairly bullish on this now.
What is “safe” superintelligence?
OpenAI co-founder and chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust Sam Altman last November. He has since apologized and hung around OpenAI before departing in May.
Little is known about the new company — including how it’s funded — but its name has inspired debate about what’s involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever disclosed. (‘Trust and safety’ is typically what internet companies call their content moderation teams.)
Sutskever said that he won’t actually build products en route to superintelligence — so no ChatGPT competitor is coming your way.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
Sutskever also hasn’t said what exactly he wants this superintelligence to do though he said he wants it to be more than a smart conversationalist and to help people with more ambitious tasks. But building the underlying tech and keeping it “safe” seems to be his only stated priority.
Sutskever’s view is still rather existentialist — as in, will the AI kill us all or not? Is it still a safe system if it perpetuates racial bias, hallucinates answers, or deceives users? Surely there should be better safeguards than, “Keep the AI away from our nukes!”

Avoiding extinction: A Q&A with Gladstone AI’s Jeremie Harris
In November 2022, the US Department of State commissioned a comprehensive report on the risks of artificial intelligence. The government turned to Gladstone AI, a four-person firm founded the year before to write such reports and brief government officials on matters concerning AI safety.
Gladstone AI interviewed more than 200 people working in and around AI about what risks keep them up at night. Their report, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” was released to the public on March 11.
The short version? It’s pretty dire: “The recent explosion of progress in advanced artificial intelligence has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like and WMD-enabling catastrophic risks.” Next to the words “catastrophic risks” is a particularly worrying footnote: “By catastrophic risks, we mean risks of catastrophic events up to and including events that would lead to human extinction.”
With all that in mind, GZERO spoke to Jeremie Harris, co-founder and CEO of Gladstone AI, about how this report came to be and how we should rewire our thinking about the risks posed by AI.
This interview has been edited for clarity and length.
GZERO: What is Gladstone and how did the opportunity to write this report come about?
Jeremie Harris: After GPT-3 came out in 2020, we assessed that the key principle behind it might be extensible enough that we should expect a radical acceleration in AI capabilities. Our views were shaped by our technical expertise in AI (we'd founded a now-acquired AI company in 2016), and by our conversations with friends at the frontier labs, including OpenAI itself.
By then, it was already clear that a ChatGPT moment was coming, and that the US government needed to be brought up to speed. We briefed a wide range of stakeholders, from cabinet secretaries to working-level action officers on the new AI landscape. A year before ChatGPT was released, we happened upon a team at the State Department that recognized the importance of AI scaling up with larger, more powerful models. They decided to commission an assessment of that risk set a month before ChatGPT launched, and we were awarded the contract.
You interviewed 200 experts. How did you determine who to talk to and who to take most seriously?
Harris: We knew who the field's key contributors were, and had spoken to many of them personally.
Our approach was to identify and engage all of the key pockets of informed opinion on these issues, from leadership to AI risk skeptics, to concerned researchers. We spoke to members of the executive, policy, safety, and capabilities teams at top labs. In addition, we held on-site engagements with researchers at top academic institutions in the US and U.K., as well as with AI auditing companies and civil society groups.
We also knew that we needed to account for the unique perspective of the US government's national security community, which has a long history of dealing with new emerging technologies and WMD-like risks. We held unprecedented workshops that brought together representatives and WMD experts from across the US interagency to discuss AI and its national security risks, and had them red-team our recommendations and analysis.
What do you want the average person to know about what you found?
Harris: AI has already helped us make amazing breakthroughs in fields like materials science and medicine. The technology’s promise is real. Unfortunately, the same capabilities that create that promise also create risks, and although we can't be certain, a significant and growing body of data does suggest that these risks could lead to WMD-scale effects if they're not properly managed. The question isn't how do we stop AI development, but rather, how can we implement common-sense safeguards that AI researchers themselves are often calling for, so that we can reap the immense benefits.
Our readership is (hopefully) more informed than the average person about AI. What should they take away from the report?
Harris: Top AI labs are currently locked in a race on the path to human-level AI, or AGI. This competitive dynamic erodes the margins that they otherwise might be investing in developing and implementing safety measures, at a time when we lack the technical means to ensure that AGI-level systems can be controlled or prevented from being weaponized. Compounding this challenge is the geopolitics of AI development, as other countries develop their own domestic AI programs.
This problem can be solved. The action plan lays out a way to stabilize the racing dynamics playing out at the frontier of the field; strengthen the US government's ability to detect and respond to AI incidents; and scale AI development safely domestically and internationally.
We suggest leveraging existing authorities, identifying requirements for new legal regimes when appropriate, and highlighting new technical options for AI governance that make domestic and international safeguards much easier to implement.
What is the most surprising—or alarming—thing you encountered in putting this report together?
Harris: From speaking to frontier researchers, it was clear that labs are under significant pressure to accelerate their work and build more powerful systems, and this increasingly involves hiring staff who are more interested in pushing forward capabilities as opposed to addressing risks. This has created a significant opportunity: many frontier lab executives and staff want to take a more balanced approach. As a result, the government has a window to introduce common-sense safeguards that would be welcomed not only by the public, but by important elements within frontier labs themselves.
Have anything to make us feel good about where things are headed?
Harris: Absolutely. If we can solve for the risk side of the equation, AI offers enormous promise. And there really are solutions to these problems. They require bold action, but that's not unprecedented: we've had to deal with catastrophic national security risks before, from biotechnology to nuclear weapons.
AI is a different kind of challenge, but it also comes with technical levers that can make it easier to secure and assure. On-chip governance protocols offer new ways to verify adherence to international treaties, and fine-grained software-enabled safeguards can allow for highly targeted regulatory measures that place the smallest possible burden on industry.
Biden preaches AI safety
The group includes large tech companies like Amazon, Meta, and Microsoft; AI-focused startups like Anthropic and OpenAI; along with government contractors, advocacy groups, research labs, and universities.
The Biden administration, which is working to implement the many provisions of the executive order, previously secured voluntary commitments from major AI firms to mitigate the worst harms possible in the development of AI.
While the government is slow to pass laws and implement executive action, engaging with the private sector directly can be a productive first step toward rolling out a new regulatory regime to rein in this emerging set of technologies. The administration recently met a series of deadlines from the wide-ranging order and has begun to offer updates, such as the new know-your-customer rules for AI firms.

Grown-up AI conversations are finally happening, says expert Azeem Azhar
“The thing that’s surprised me most is how well CEOs are [now] articulating generative AI, this technology that’s only been public for a year or so,” Azhar says. “I’ve never experienced that in my life and didn’t realize how quickly they’ve moved.”
Azhar and Bremmer also discuss the underlying technology that’s allowed generative AI tools like ChatGPT to advance so quickly and where conversations about applications of artificial intelligence go from here. Whereas a year ago, experts were focused on the macro implications of existential risk, Azhar is excited this year to hear people focus on practical things like copyright and regulation — the small yet impactful things that move the economy and change how we live our lives.