China says no to AI-powered nukes
In a 90-minute meeting on the sidelines of the APEC conference in Lima, Peru, on Saturday, US President Joe Biden and Chinese President Xi Jinping agreed that humans, not artificial intelligence, should control nuclear weapons, hashing out the agreement after months of reported Chinese resistance to engaging in nuclear arms talks at all.
In a national security memo last month, the Biden administration explicitly prohibited the use of AI to skirt existing protocols around launching nuclear weapons. But China had resisted making a public declaration until now.
The two countries are locked in a race to build up their respective AI capabilities — and that race is deeply intertwined with their military ambitions. The US, which holds a technological edge as home to the world’s largest chip designers and AI software companies, has enacted strict export controls to keep this technology out of China’s hands. With the Trump administration coming to power in January, it’s unclear how Washington’s China policy will change, though it is expected to be similarly aggressive.
Will Israel strike Iran – and if so, how hard?
At least nine people were killed in airstrikes on central Beirut early Thursday as Israel intensified its campaign in Lebanon — while also vowing to retaliate for Iran’s missile attack on Tuesday.
Having eliminated Hezbollah leader Hassan Nasrallah and much of the group’s high command, Israel now sees an opportunity to strike while command networks are still reorganizing.
On Thursday, Israel also launched a strike on a West Bank refugee camp, killing at least 14 people, while nearly 100 more were killed in airstrikes on Gaza, where the local health authority says the death toll has surpassed 41,000 since Oct. 7, 2023.
How far will Israel go? Iran attempted to avenge Nasrallah’s assassination by firing a barrage of ballistic missiles at Israel on Tuesday. Israeli Prime Minister Benjamin Netanyahu has sworn to respond — and, given his recent appetite for risk, speculation abounds as to what he might do.
US President Joe Biden and other G7 leaders, while affirming Israel’s right to respond to the Iranian attack, warned Netanyahu that it should be proportional. The signal: Don’t hit nuclear facilities.
Republican US Sen. Lindsey Graham on Thursday slammed the White House for warning against strikes on Iran’s nuclear sites, and some Israeli officials have indeed called for hitting those facilities, though that remains a remote prospect. The sites are believed to be well-protected under multiple meters of granite and may prove impossible to fully destroy.
What is more likely — but still escalatory, says Eurasia Group’s Cliff Kupchan — is an attack on parts of Iran’s oil industry, the country’s economic lifeline. “The most damaging targets would be oil refineries, oil production facilities, and oil export terminals, in the order of least to most provocative,” he says.
Refineries mostly provide fuel for domestic purposes and might be easier to seal off, depending on how much warning Israel provides. But “if [Iranians] can’t export their 1.5 to 1.6 million barrels a day, they lose critical income,” Kupchan adds.
Early Friday, Iran’s Supreme Leader Ayatollah Ali Khamenei delivered a rare public sermon in Tehran. Speaking just three days before its first anniversary, he called the Oct. 7 attack that killed more than 1,200 Israelis “legitimate” and defended Tehran’s missile attack on Israel on Tuesday. He also called on Arab nations to unite against Israel, referring to it as their “common enemy.”
Putin's nuclear policy revision is a sign of weakness
Carl Bildt, former prime minister of Sweden and co-chair of the European Council on Foreign Relations, shares his perspective on European politics from the Security Forum in Warsaw, Poland.
What are the implications of the revisions to the nuclear doctrine that President Putin of Russia announced last week?
We don't really know, but I would rather see it as a sign of weakness. If President Putin had confidence in the ability of his conventional forces to achieve the aims that he has set for them in his aggression against Ukraine, he wouldn't need to do this. Does it mean that he's intending to use nuclear weapons? Not necessarily. But he wants to remind us of the fact that it's in his arsenal, and I think we know that already.
What are the implications of the victory of the far-right forces in the Austrian elections?
Well, it is quite worrying, I have to say. The far-right Freedom Party is really far-right and really pro-Russian. They got 29% of the vote, a record for them. As to whether they will be able to form a government, I think the other parties will try to form one against them, and it’s going to take a couple of months to see whether that succeeds. But the worrying thing is, of course, that there’s a risk of the formation of a far-right, nationalist, more neutralistic, you could say, bloc in Central Europe. We already have the Hungarians under Viktor Orbán nearby. We have Slovakia, with the government in Bratislava. There’s an election coming up in the Czech Republic next year. So I’m not entirely comfortable, to put it mildly, with what we have seen coming out of the Austrian election.
Zelensky vies for the world’s attention at UN, as Russia resets its nuclear red line
UNITED NATIONS – When Volodymyr Zelensky addressed the sparsely populated United Nations General Assembly Hall on Wednesday, he cast Russia’s war in Ukraine as a threat to Europe and beyond – warning of the rising threat of nuclear disaster, and the potential for the war to spread into Eastern Europe. The GA’s empty chairs reflected the problem Zelensky’s speech sought to address: As the situation in the Middle East spirals out of control, much of the world’s attention has turned away from Ukraine.
The 46-year-old leader condemned Russia’s targeting of his country’s energy infrastructure, saying that “80% of its energy system [is] gone.” He said the attacks have not only exposed millions of Ukrainians to a brutal winter without electricity but also put Ukraine’s nuclear power plants at risk. Zelensky accused Putin of trying to disconnect Ukraine’s Zaporizhzhia nuclear plant from the power grid, putting Europe one drone strike away from a “nuclear disaster” in which “radiation will not respect state borders.”
Zelensky has used this week’s trip to outline a “victory plan” – one that includes more money and permission to fire US-made long-range weapons deep inside Russia, strengthening Ukraine’s position enough to force Russia to the negotiating table. Aware that battlefield help alone won’t push Vladimir Putin to cut a deal over Ukraine’s future, Zelensky has also asked the West to apply economic, political, and diplomatic pressure.
Biden and Zelensky will meet to discuss the plan on Thursday. In anticipation, Russia updated its nuclear doctrine on Wednesday to say it may use nuclear weapons if attacked by a state backed by a nuclear power, although this “red-line” rhetoric may just be meant to deter Biden from granting Ukraine use of the long-range weapons.
It’s tempting to ignore yet another Russian threat of retaliation. After all, Ukrainian troops have invaded and now occupy about 500 square miles of Russian land. If Ukraine’s surprise invasion and the inability of Russian forces so far to push them out haven’t provoked a deadly escalation from Moscow, what Russian action should Western governments fear?
But Kremlin spokesman Dmitry Peskov stands on firmer ground in arguing that it’s “impossible to force Russia into peace.” Putin has staked all of his personal political credibility on the restoration of Russian control over Ukraine, and he has reason to believe Russia can still win the war.
Today, there’s no evidence that Putin faces any internal threat to his leadership. But negotiating away land he claims is part of Russia could expose the aging president to challenges from within Russia’s political and economic elite.
China’s nuclear noncommitment
Today, global delegates to the Responsible AI in the Military Domain Summit in Seoul adopted a non-binding agreement promising to keep nuclear weapons solely under human control — and not under the control of artificial intelligence.
Sixty of the 100 countries in attendance adopted the “blueprint for action,” a pledge to “maintain human control and involvement for all actions … concerning nuclear weapons employment.”
Notably, China did not sign the agreement. The White House previously disclosed that China has refused to commit to limiting nuclear decision-making to humans. “Our position has been publicly clear for a long time: We don’t think that autonomous systems should be getting near any decision to launch a nuclear weapon. That’s a long-stated US policy,” said US National Security Council Director of Technology Tarun Chhabra earlier this summer. “We think all countries around the world should sign up to that.” (Russia was not invited to the summit due to its invasion of Ukraine.)
Experts previously told GZERO AI that China has signaled to the UN that nuclear weapons should never be used in war and that powerful weapons systems should stay under human control, but Beijing has shied away from official and direct pronouncements on the matter.
Washington tries to reassure Beijing over nuclear strategy
The White House on Wednesday tried to ease Beijing’s “serious concerns” over reports that the US is adjusting its nuclear strategy to put more focus on East Asia. The US National Security Council said the shift “is not a response to any single entity, country, nor threat” and that North Korea and Russia also factor into it.
China’s foreign ministry said “the United States has constantly stirred up the so-called China nuclear threat theory in recent years.” China objects because it has always maintained a no-first-use policy with its nuclear weapons, and its arsenal is small compared to Washington’s 3,700 warheads.
Still, China has been arming – the Pentagon estimated last year that Beijing now has 500 warheads and may reach 1,000 by 2030. But from China’s point of view, that’s just playing catch-up, says Eurasia Group’s Jeremy Chan. “Beijing sees US rhetoric about arms control as an effort to lock in the Chinese arsenal at a level that is still a fraction of Washington’s,” he explained. “They want to play for more time.”
US and Chinese interests may not be entirely misaligned. Chinese President Xi Jinping reportedly told Russian President Vladimir Putin, for example, to quit threatening to use nuclear weapons in Ukraine when the two met in Moscow last year, and Chan notes that nuclear proliferation is one of the few issues where Beijing won’t cover for North Korea.
How the Department of Homeland Security’s WMD office sees the AI threat
The US Department of Homeland Security is preparing for the worst possible outcomes of the rapid progression of artificial intelligence technology. What if powerful AI models are used to help foreign adversaries or terror groups build chemical, biological, radiological, or nuclear weapons?
The department’s Countering Weapons of Mass Destruction office, led by Assistant Secretary Mary Ellen Callahan, issued a report to President Joe Biden that was released to the public in June, with recommendations about how to rein in the worst threats from AI. Among other things, the report recommends building consensus across agencies, developing safe harbor measures to incentivize reporting vulnerabilities to the government without fear of prosecution, and developing new guidelines for handling sensitive scientific data.
We spoke to Callahan about the report, how concerned she actually is, and how her office is using AI to further its own goals while trying to outline the risks of the technology.
This interview has been edited for clarity and length.
GZERO: We profile a lot of AI tools – some benign, some very scary from a privacy or disinformation perspective. But when it comes to chemical, biological, radiological, and nuclear weapons, what do you see as the main threats?
Mary Ellen Callahan: AI is going to lower barriers to entry for all actors, including malign actors. The crux of this report is to look for ways to increase the promise of artificial intelligence, particularly with chemical and biological innovation, while limiting the perils, finding that kind of right balance between the containment of risk and fostering innovation.
We’re talking in one breath about chemical, biological, radiological, and nuclear threats — they’re all very different. Is there one that you’re most concerned about or see as most urgent?
I don’t want to give away too many secrets in terms of where the threats are. Although the task from the president was chemical, biological, radiological, nuclear threats, we focus primarily on chemical and biological threats for two reasons: One, chemical and biological innovation that is fostered by artificial intelligence is further along, and two, chemical and biological formulas and opportunities have already been included in some AI models.
Also, relatedly, the Department of Energy, which has a specialization in radiological and nuclear threats, is doing a separate classified report.
So, that’s less about the severity of the problem and more about what we’ll face soonest, right?
Well, anything that’s a WMD threat is low probability, but high impact. So we’re concerned about these at all times, but in terms of the AI implementation, the chemical and biological are more mature, I’d say.
How has the rise of AI changed the focus of your job? And is there anything about AI that keeps you up at night?
I would actually say that I am more sanguine now, having done a deeper dive into AI. One, we’re early in the stages of artificial intelligence development, and so we can catch this wave early. Two, there is a lot of interest, and model developers are engaging with us proactively. And there are chokepoints: The physical creation of these threats remains hard. How do you take it from ideation to execution? There are a lot of steps between now and then.
And so what we’re trying to build into this guidance for AI model developers and others is pathway defeat — to try to develop off-ramps where we can defeat the adversaries, maybe early in their stage, maybe early as they are dealing with the ideation, [so they’re] not even able to get a new formula, or maybe at different stages of the development of a threat.
How are you thinking about the threat of open-source AI models that are published online for anyone to access?
We talked a little bit about open source, but that wasn’t the focus of the report. I think the more important thing to focus on is the sources of the data being ingested – as I mentioned, there is already public-source data related to biology and chemistry. So whether or not it is an open-source model, it’s the content of the models that I’m more focused on.
How do you feel about the pace of regulation in this country versus the pace of innovation?
We’re not looking at regulations to be a panacea here. What we’re trying to do right now is to make sure that everyone understands they have a stake in making artificial intelligence as safe as possible, and really to develop a culture of responsibility throughout this whole process — using a bunch of different levers. One lever is the voluntary commitments.
Another lever is the current laws. The current US regime between export controls, privacy, technology transfer, intellectual property, all of those can be levers and can be used in different ways. Obviously, we need to work with our international allies and make sure that we are working together on this. I don’t want to reveal too much, but there is interest that there can be some allied response in terms of establishing best practices.
Secretary Alejandro Mayorkas has noted that regulation can be backward-looking and reactive and might not keep up with the pace of technology. So, therefore, we’re not suggesting or asking for any new authorities or regulations in the first instance. But if we identify gaps, we may revisit whether new authorities or laws are needed.
In terms of legislation, do you think you have what you need to do your job? Or are there clear gaps in what’s on the books?
I think that the diverse nature of our laws is actually a benefit, and we can really leverage what we have on the books now and make a lot of progress — export controls, technology transfers, intellectual property, criminal behavior. And obviously, if we had CFATS on the books — the Chemical Facility Anti-Terrorism Standards from my friends at CISA — that would be great. But we do have a lot of robust levers that we can use now. And even the voluntary commitments, with the model developers saying they want to do it — if they don’t comply with that, there could even be civil penalties.
Can you tell me about the safe harbor measure that your report recommends and how you want that to work?
There are two aspects to the safe harbor. One is having an “if you see something, say something” element: people in labs, people who are selling products, people who notice something and say, “that doesn’t ring true.” That standard helps build a culture of responsibility.
And if somebody does report, whether they’ve done something inadvertently to create a novel threat or they’ve noticed something in the pipeline, then there could be a safe harbor reporting element. A safe harbor from civil or criminal prosecution, though, may need regulation.
Are you using AI at all in your office?
Yep, we are using AI on a couple of different detection platforms. The Countering Weapons of Mass Destruction Office has the subject matter expertise for CBRN threats here in the department, and we provide training, technology, equipment, and detection capability. So we’ve been using AI to help refine our algorithms for identifying radiological, nuclear, chemical, and biological threats, and we’re going to continue to do that. We’re also using AI as part of our biosurveillance program, both to identify whether there is a biodetection threat out there and to look for content that would indicate a biological threat in the country.
Let’s end on an optimistic note. Is there anything else that gives you hope about AI factoring into your work?
The promise of AI is extraordinary. It really is going to be a watershed moment for us, and I’m really excited about it. I think this is exactly the right time to be thinking about the safety and security of chemical and biological threats. We’ve got to get in there early enough to establish these standards and protocols, to share guidance, and to fold risk assessments into these calculations for the model developers, but also for the public at large. So I’m fairly bullish on this now.
Graphic Truth: Big bombs get big budgets in 2023
The world’s nuclear powers increased their spending on these apocalyptic weapons by a record 13% between 2022 and 2023, according to the International Campaign to Abolish Nuclear Weapons. Cumulatively, they spent a cool $91.4 billion on building, maintaining, and researching nuclear weapons.
Well over half of that spending came from the United States, to the tune of $51 billion. The next highest spenders were China and Russia, with comparatively frugal expenditures of $11 billion and $8 billion, respectively. The increases were not driven by building new weapons — arsenal levels remained fairly stable, according to a different study by the Stockholm International Peace Research Institute — but instead by developing new technology to target and launch the weapons.
The US and UK, which saw the largest increases in nuclear spending, are developing new rockets and submarines that they hope will help deter attacks. The US, UK, Russia, China, France, India, and North Korea are also reportedly developing so-called hypersonic missiles, which travel at over five times the speed of sound to evade interception.
That amount of spending comes to $2,898 every second — roughly what the average global household makes in three months. As if spending vast amounts on weapons that could effectively end the world in about two hours wasn't tragic enough, in countries like North Korea and Pakistan, endemic poverty and economic stagnation mean every dollar spent on nukes is one less spent on food, fuel, and medicine.
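For readers who want to check the per-second figure, a minimal back-of-the-envelope sketch in Python reproduces it, assuming ICAN’s $91.4 billion total is spread evenly across a 365-day year:

```python
# Back-of-the-envelope check of the per-second spending figure,
# assuming ICAN's $91.4 billion total is spread evenly over a 365-day year.
TOTAL_SPENDING_USD = 91.4e9              # ICAN estimate for 2023
SECONDS_PER_YEAR = 365 * 24 * 60 * 60    # 31,536,000 seconds

per_second = TOTAL_SPENDING_USD / SECONDS_PER_YEAR
print(f"${per_second:,.0f} per second")  # -> $2,898 per second
```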