How the Department of Homeland Security’s WMD office sees the AI threat
The US Department of Homeland Security is preparing for the worst possible outcomes from the rapid progression of artificial intelligence technology. What if powerful AI models are used to help foreign adversaries or terror groups build chemical, biological, radiological, or nuclear weapons?
The department’s Countering Weapons of Mass Destruction office, led by Assistant Secretary Mary Ellen Callahan, issued a report to President Joe Biden that was released to the public in June, with recommendations about how to rein in the worst threats from AI. Among other things, the report recommends building consensus across agencies, developing safe harbor measures to incentivize reporting vulnerabilities to the government without fear of prosecution, and developing new guidelines for handling sensitive scientific data.
We spoke to Callahan about the report, how concerned she actually is, and how her office is using AI to further its own goals while trying to outline the risks of the technology.
This interview has been edited for clarity and length.
GZERO: We profile a lot of AI tools – some benign, some very scary from a privacy or disinformation perspective. But when it comes to chemical, biological, radiological, and nuclear weapons, what do you see as the main threats?
Mary Ellen Callahan: AI is going to lower barriers to entry for all actors, including malign actors. The crux of this report is to look for ways to increase the promise of artificial intelligence, particularly with chemical and biological innovation, while limiting the perils, finding that kind of right balance between the containment of risk and fostering innovation.
We’re talking in one breath about chemical, biological, radiological, and nuclear threats — they’re all very different. Is there one that you’re most concerned about or see as most urgent?
I don’t want to give away too many secrets in terms of where the threats are. Although the task from the president was chemical, biological, radiological, nuclear threats, we focus primarily on chemical and biological threats for two reasons: One, chemical and biological innovation that is fostered by artificial intelligence is further along, and two, chemical and biological formulas and opportunities have already been included in some AI models.
Also, relatedly, the Department of Energy, which has a specialization in radiological and nuclear threats, is doing a separate classified report.
So, that’s less about the severity of the problem and more about what we’ll face soonest, right?
Well, anything that’s a WMD threat is low probability, but high impact. So we’re concerned about these at all times, but in terms of the AI implementation, the chemical and biological are more mature, I’d say.
How has the rise of AI changed the focus of your job? And is there anything about AI that keeps you up at night?
I would actually say that I am more sanguine now, having done a deeper dive into AI. One, we’re early in the stages of artificial intelligence development, and so we can catch this wave earlier. Two, there is a lot of interest and encouragement with regard to the model developers working with us proactively. There are chokepoints: The physical creation of these threats remains hard. How do you take it from ideation to execution? And there are a lot of steps between now and then.
And so what we’re trying to build into this guidance for AI model developers and others is pathway defeat — to try to develop off-ramps where we can defeat the adversaries, maybe early in their stage, maybe early as they are dealing with the ideation, [so they’re] not even able to get a new formula, or maybe at different stages of the development of a threat.
How are you thinking about the threat of open-source AI models that are published online for anyone to access?
We talked a little bit about open source, but that wasn’t the focus of the report. I think the more important thing to focus on is the sources of the data being ingested – as I mentioned, there is already public-source data related to biology and to chemistry. So whether or not it is an open-source model, it’s the content of the models that I’m more focused on.
How do you feel about the pace of regulation in this country versus the pace of innovation?
We’re not looking at regulations to be a panacea here. What we’re trying to do right now is to make sure that everyone understands they have a stake in making artificial intelligence as safe as possible, and really to develop a culture of responsibility throughout this whole process — using a bunch of different levers. One lever is the voluntary commitments.
Another lever is the current laws. The current US regime of export controls, privacy, technology transfer, and intellectual property law offers levers that can be used in different ways. Obviously, we need to work with our international allies and make sure that we are working together on this. I don’t want to reveal too much, but there is interest in some kind of allied response in terms of establishing best practices.
Secretary Alejandro Mayorkas has noted that regulation can be backward-looking and reactive and might not keep up with the pace of technology. So, therefore, we’re not suggesting or asking for any new authorities or regulations in the first instance. But if we identify gaps, we may revisit whether new authorities or laws are needed.
In terms of legislation, do you think you have what you need to do your job? Or are there clear gaps in what’s on the books?
I actually think that the diverse nature of our laws is a benefit, and we can make a lot of progress with what we have on the books now — export controls, technology transfers, intellectual property, criminal behavior — and obviously if we have CFATS on the books, that would be great: the Chemical Facility Anti-Terrorism Standards from my friends at CISA. But we do have a lot of robust levers that we can use now. And even with those voluntary commitments, where the model developers say they want to do it — if they don’t comply, there could even be civil penalties.
Can you tell me about the safe harbor measure that your report recommends and how you want that to work?
There are two aspects to the safe harbor. One is having an “if you see something, say something” aspect. That means people in labs, people who are selling products, people who say, “that doesn’t ring true.” That standard can help build a culture of responsibility.
And if somebody does report, then there could be a safe harbor reporting element — whether they’ve done something inadvertently to create a new novel threat, or they’ve noticed something in the pipeline. The safe harbor for abstaining from civil or criminal prosecution — that may need regulation.
Are you using AI at all in your office?
Yep. Actually, we are using AI on a couple of different detection platforms. The Countering Weapons of Mass Destruction Office has the subject matter expertise for CBRN threats here in the department, and we provide training, technology, equipment, and detection capability. So we’ve been using AI to help refine our detection algorithms for identifying radiological, nuclear, chemical, and biological threats, and we’re going to continue to do that. We’re also using AI as part of our biosurveillance program, both to identify whether there is a biodetection threat out there and to look for content that would indicate a biological threat somewhere in the country.
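(Callahan didn’t describe her office’s methods, but as a loose, purely hypothetical illustration of the kind of pattern-finding a biosurveillance pipeline might perform, here is a minimal Python sketch that flags days when a daily report count spikes well above its recent baseline. The function name, window size, threshold, and data are all invented for illustration and reflect no actual DHS system.)

```python
# Hypothetical sketch: flag anomalous spikes in a daily biosurveillance
# count series against a trailing rolling baseline. Illustrative only --
# not a description of any DHS system.
from statistics import mean, stdev

def flag_spikes(counts, window=7, z_threshold=3.0):
    """Return indices of days whose count exceeds the trailing
    `window`-day baseline by more than `z_threshold` standard deviations."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]           # trailing window only
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Synthetic example: steady background with one injected spike on day 20.
daily_reports = [12, 11, 13, 10, 12, 14, 11, 13, 12, 10,
                 11, 13, 12, 14, 11, 12, 10, 13, 12, 11,
                 48, 12, 11, 13]
print(flag_spikes(daily_reports))  # -> [20]
```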
Let’s end on an optimistic note. Is there anything else that gives you hope about AI factoring into your work?
The promise of AI is extraordinary. It really is going to be a watershed moment for us, and I’m really excited about this. This is exactly the right time to be thinking about the safety and security of the chemical and biological threats. We’ve got to get in there early enough to establish these standards and protocols, to share guidance, and to fold risk assessments into these calculations for the model developers, but also for the public at large. So I’m fairly bullish on this now.
Hard Numbers: Unnatural gas needs, Google’s data centers, Homeland Security’s new board, Japan’s new LLM
8.5 billion: Rising energy usage from AI data centers could create additional demand for natural gas of up to 8.5 billion cubic feet per day, according to an investment bank estimate. Generative AI has high energy and water demands, for powering and cooling expansive data centers, which climate advocates have warned could exacerbate climate change.
32 billion: Google is pouring $3 billion into data center projects to power its AI systems. That budget includes $2 billion for a new data center in Fort Wayne, Ind., and $1 billion to expand three existing ones in Virginia. In earnings reports this week, Google, Meta, and Microsoft disclosed that they had spent a combined $32 billion on data centers and related capital expenditures in the first quarter alone.
22: The US Department of Homeland Security announced a new Artificial Intelligence Safety and Security Board with 22 members including the CEOs of Alphabet (Sundar Pichai), Anthropic (Dario Amodei), OpenAI (Sam Altman), Microsoft (Satya Nadella), and Nvidia (Jensen Huang). The goal: to advise Secretary Alejandro Mayorkas on “safe and secure development and deployment of AI technology in our nation’s critical infrastructure.”
960 million: SoftBank, the Japanese technology conglomerate, plans to pour $960 million into upgrading its computing facilities over the next two years in order to boost its AI capabilities. The company’s broad ambitions include funding and developing a “world-class” large language model geared specifically toward the Japanese language.

Mayorkas impeachment: Reps. Lofgren & Spartz on House vote on DHS secretary
The US House of Representatives is voting on a Republican-led resolution to impeach Homeland Security Secretary Alejandro Mayorkas over his handling of the immigration crisis on the southern border. On GZERO World, Ian Bremmer sat down with Rep. Zoe Lofgren (D-CA) and Rep. Victoria Spartz (R-IN), who both sit on the House Immigration Subcommittee, moments before the vote to get their thoughts on the first impeachment of a cabinet secretary in modern history.
“[The impeachment] has nothing to do with meeting the constitutional standards,” Lofgren, former chair of the Subcommittee on Immigration Integrity, Security, and Enforcement, tells Bremmer. “It’s a complete waste of time.”
House Democrats say the vote is unconstitutional and politically motivated, but the GOP, which has a razor-thin three-vote majority in the House, accuses Mayorkas of a “willful and systemic refusal to comply with the law” and breaching public trust.
“I always believe that ultimate responsibility lays [with] the top executive,” GOP Rep. and Ukrainian American Spartz argues. “We need to send the message that [we] can’t allow executives not to do their duty to the public.”
Watch the full interview on GZERO World with Ian Bremmer on public television beginning this Saturday, February 10. Check local listings.
DHS Secretary Kirstjen Nielsen Resigns: US Politics in 60 Seconds
What will Attorney General William Barr reveal about the Mueller Report when he testifies on the Hill?
I don't think very much. He'll defend his summary and say that more will be revealed once the redaction period is over and the full report can be put out. So he'll probably evade a lot of tough questions.
Will DHS go in a tougher direction now that Secretary Nielsen is gone?
Trump certainly hopes so: more zero-tolerance policy at the border and fewer asylum-seekers let in. He certainly wants DHS to go much tougher now that Secretary Nielsen is gone.
Can Dems stop the logjam on emergency aid on Capitol Hill?
Well, they'll try with a bill that adds money for disaster relief in the Midwest. But the issue of Puerto Rico disaster funding is still going to be a problem in the Senate. So I'm not sure the logjam is over.
Can the New York State legislature force the release of President Trump's tax returns?
Well, they're certainly going to try with a new bill to do that. Democrats control the state, so you'd think they could, but there are still a lot of questions about whether it would set a bad precedent to force the release of a single person's tax returns. So the effort will be there; I'm not sure it'll be successful.
And go deeper on topics like cybersecurity and artificial intelligence at Microsoft On the Issues.