Chinese telecom hack sparks national security fears
A group of hackers backed by the Chinese government breached multiple US telecom firms, including AT&T and Verizon, and potentially accessed data used by law enforcement officials. Specifically, the hackers appear to have targeted information about court-authorized wiretaps, which could be related to multiple ongoing US cases concerning Chinese government agents intimidating and harassing people in the United States.
The hack was carried out by a group known as Salt Typhoon, one of many such units the Chinese government uses to infiltrate overseas networks. Cybersecurity teams from Microsoft and a Google subsidiary have been helping the FBI investigate the breach; the bureau's cyber agents are reportedly outnumbered 50-to-1 by their Chinese counterparts.
Will the hack undermine US-China relations? Both sides have been trying to keep tensions under control — largely successfully — all year, but this incident may be too awkward to smooth over. China’s Embassy in Washington, DC, denied the hack and accused the US of “politicizing cybersecurity issues to smear China,” and the FBI and DOJ have not commented. We’re watching how the fallout might affect a notional Biden-Xi phone call the White House has reportedly been attempting to arrange.
"The next 50 years belong to Alaska" — An interview with Gov. Mike Dunleavy
Listen: On the GZERO World Podcast, Ian Bremmer sits down with Alaska Governor Mike Dunleavy to explore the state’s pivotal role in America’s energy, technology, and national security. Alaska sits at the heart of some of America's thorniest geopolitical challenges. Its renewable resources, natural gas, rare earth minerals, and freshwater make it a critical part of the country's energy and technology futures, while its strategic location near Russia and China underscores its geopolitical importance. No one understands this better than Governor Dunleavy, who drills into Alaska's energy and economic potential and discusses US national security concerns in a melting Arctic on the GZERO World Podcast.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, or your preferred podcast platform to receive new episodes as soon as they're published.
How the Department of Homeland Security’s WMD office sees the AI threat
The US Department of Homeland Security is preparing for the worst possible outcomes from the rapid progression of artificial intelligence technology. What if powerful AI models are used to help foreign adversaries or terror groups build chemical, biological, radiological, or nuclear weapons?
The department’s Countering Weapons of Mass Destruction office, led by Assistant Secretary Mary Ellen Callahan, issued a report to President Joe Biden that was released to the public in June, with recommendations about how to rein in the worst threats from AI. Among other things, the report recommends building consensus across agencies, developing safe harbor measures to incentivize reporting vulnerabilities to the government without fear of prosecution, and developing new guidelines for handling sensitive scientific data.
We spoke to Callahan about the report, how concerned she actually is, and how her office is using AI to further its own goals while trying to outline the risks of the technology.
This interview has been edited for clarity and length.
GZERO: We profile a lot of AI tools – some benign, some very scary from a privacy or disinformation perspective. But when it comes to chemical, biological, radiological, and nuclear weapons, what do you see as the main threats?
Mary Ellen Callahan: AI is going to lower barriers to entry for all actors, including malign actors. The crux of this report is to look for ways to increase the promise of artificial intelligence, particularly with chemical and biological innovation, while limiting the perils, finding that kind of right balance between the containment of risk and fostering innovation.
We’re talking in one breath about chemical, biological, radiological, and nuclear threats — they’re all very different. Is there one that you’re most concerned about or see as most urgent?
I don’t want to give away too many secrets in terms of where the threats are. Although the task from the president was chemical, biological, radiological, nuclear threats, we focus primarily on chemical and biological threats for two reasons: One, chemical and biological innovation that is fostered by artificial intelligence is further along, and two, chemical and biological formulas and opportunities have already been included in some AI models.
And also because relatedly, the Department of Energy, which has a specialization in radiological and nuclear threats, is doing a separate classified report.
So, that’s less about the severity of the problem and more about what we’ll face soonest, right?
Well, anything that’s a WMD threat is low probability, but high impact. So we’re concerned about these at all times, but in terms of the AI implementation, the chemical and biological are more mature, I’d say.
How has the rise of AI changed the focus of your job? And is there anything about AI that keeps you up at night?
I would actually say that I am more sanguine now, having done a deeper dive into AI. One, we’re early in the stages of artificial intelligence development, and so we can catch this wave earlier. Two, there is a lot of interest and encouragement with regard to the model developers working with us proactively. There are chokepoints: The physical creation of these threats remains hard. How do you take it from ideation to execution? And there are a lot of steps between now and then.
And so what we’re trying to build into this guidance for AI model developers and others is pathway defeat — to try to develop off-ramps where we can defeat the adversaries, maybe early in their stage, maybe early as they are dealing with the ideation, [so they’re] not even able to get a new formula, or maybe at different stages of the development of a threat.
How are you thinking about the threat of open-source AI models that are published online for anyone to access?
We talked a little bit about open-source, but that wasn’t the focus of the report. But, I think that the more important thing to focus on is the sources of the ingestion of the data – as I mentioned, there is already public source data related to biology and to chemistry. And so whether or not it is an open-source model or not, it's the content of the models that I'm more focused on.
How do you feel about the pace of regulation in this country versus the pace of innovation?
We’re not looking at regulations to be a panacea here. What we’re trying to do right now is to make sure that everyone understands they have a stake in making artificial intelligence as safe as possible, and really to develop a culture of responsibility throughout this whole process — using a bunch of different levers. One lever is the voluntary commitments.
Another lever is the current laws. The current US regime between export controls, privacy, technology transfer, intellectual property, all of those can be levers and can be used in different ways. Obviously, we need to work with our international allies and make sure that we are working together on this. I don’t want to reveal too much, but there is interest that there can be some allied response in terms of establishing best practices.
Secretary Alejandro Mayorkas has noted that regulation can be backward-looking and reactive and might not keep up with the pace of technology. So, therefore, we’re not suggesting or asking for any new authorities or regulations in the first instance. But if we identify gaps, we may revisit whether new authorities or laws are needed.
In terms of legislation, do you think you have what you need to do your job? Or are there clear gaps in what’s on the books?
I actually think that the diverse nature of our laws is actually a benefit and we can really leverage and make a lot of progress with even what we have on the books now — export controls, technology transfers, intellectual property, criminal behavior, and obviously if we have CFATS on the books, that would be great — the Chemical Facility Anti-Terrorism Standards from my friends at CISA. But we do have a lot of robust levers that we can use now. And even those voluntary commitments with the model developers saying they want to do it — if they don’t comply with that, there could even be civil penalties related to that.
Can you tell me about the safe harbor measure that your report recommends and how you want that to work?
There are two aspects to the safe harbor. One is having an “if you see something, say something” aspect. So that means people in labs, people who are selling products, people who say stuff like, “that doesn’t ring true.” This standard can be used as a culture of responsibility.
And if somebody does report, then there could be a safe harbor reporting element — whether they’ve done something inadvertently to create a new novel threat, or they’ve noticed something in the pipeline. The safe harbor for abstaining from civil or criminal prosecution — that may need regulation.
Are you using AI at all in your office?
Yep. Actually, we are using AI on a couple of different detection platforms. The Countering Weapons of Mass Destruction Office has the subject matter expertise for CBRN threats here in the department, and we provide training, technology, equipment, and detection capability. So we’ve been using AI to help refine our algorithms for identifying radiological, nuclear, chemical, and biological threats, and we’re going to continue to use that. We’re also using AI as part of our biosurveillance program, both to identify whether there is a biodetection threat out there and to scan content for information that would indicate a biological threat somewhere in the country.
Let’s end on an optimistic note. Is there anything else that gives you hope about AI factoring into your work?
The promise of AI is extraordinary. It really is going to be a watershed moment for us, and I'm really excited about this. I think thinking about the safety and security of the chemical and biological threats at this moment is exactly the right time. We’ve got to get in there early enough to establish these standards, these protocols, to share guidance, to fold in risk assessments into these calculations for the model developers, but also for the public at large. So I’m fairly bullish on this now.
Hong Kong passes harsh national security law
Letter of the law. It allows authorities to detain people without charge for up to 16 days, conduct closed-door trials, and ban companies found to be “working for foreign forces.” But the devil is in the (lack of) details: The bill closely imitates Beijing’s state secrets law, with a broad definition of what might constitute theft or espionage.
It also introduces the new offense of “external interference.” Anyone found collaborating with loosely defined “external forces” could face charges.
What does it mean for Hong Kong? Chief Executive John Lee said the law – set to go into effect on Saturday – was necessary to halt unrest and root out “espionage activities.” He said passing it quickly will now allow his government to focus on economic growth, a key concern.
Hong Kong has long been known for an open business climate, but the laissez-faire vibes are fading after the harsh crackdowns on protests that broke out over an extradition law in 2019. Many of the city’s best and brightest have gone abroad, and multinationals worry about the risks of operating under the new rules. Lee bets that a booming economy might put minds at ease.
US Government information: What's the threshold for "classified"?
There are many reasons for a government to classify information. The US does not want Vladimir Putin getting his hands on our nuclear codes, for example. An estimated 50 million documents are classified every year, though the exact number is unknown—not because it’s classified, but because the government just can’t keep track of it all. But in the words of the former US Solicitor General Erwin Griswold, some “secrets are not worth keeping.”
This week on GZERO World, former Congresswoman Jane Harman argues that America has had an over-classification problem for decades. Harman cites the findings of the 9/11 Commission, which concluded that a lack of information-sharing between agencies like the CIA, the FBI, and the NSA prevented the US government from foiling the largest terrorist attack ever on American soil. A key reason for that failure: the over-classification of information.
It’s difficult for Americans to understand the actions of their government if much of its work is classified. It also forces journalists to weigh the risks of disclosing information to the public against the possibility of prosecution under the Espionage Act.
Beyond national security concerns, over-classification is also driven by incentives. If you’re a government employee, the risk of classifying something that doesn’t need to be classified is low. But if you un-classify something that you shouldn’t, you're in trouble.
Tune in to “GZERO World with Ian Bremmer” on US public television to watch the full interview. Check local listings.
Is it time for the US government to rethink how it keeps its secrets?
Here’s one of the United States' worst-kept secrets: its flawed classification process. Whether it’s the unnecessary classification of material or the storage of top-secret documents behind a flimsy shower curtain in a Mar-a-Lago bathroom, it’s crucial to address our approach to confidentiality. Joining GZERO World to discuss all things classified, including those documents in Trump’s bathroom, is former Congresswoman Jane Harman. As the ranking member of the House Intelligence Committee after 9/11, the nine-term congresswoman has insider knowledge of the matter.
According to Harman, “The only good reason to classify documents is to protect our sources and methods, how we got information.” The 9/11 Commission identified a lack of information-sharing among agencies such as the CIA, the FBI, and NSA as a key reason the government was unable to stop the attacks. Over-classification of information played a significant role in this failure. Approximately 50 million documents are estimated to be classified each year, although the exact number remains unknown—not due to classification, but because the government struggles to keep track of it all. In the words of former US Solicitor General Erwin Griswold, some “secrets are not worth keeping.”
To see the full interview with Jane Harman, watch GZERO World with Ian Bremmer at gzeromedia.com/gzeroworld or on US public television. Check local listings.
The next global superpower?
Ian Bremmer's Quick Take: Hi everybody. Ian Bremmer here. A Quick Take for you and my TED Talk has just landed. So yes, that is what I want to talk about. Kind of, what happens after the GZERO? Who is the next global superpower? Do the Americans come back? Is it the Chinese century? No, it's none of the above. We don't have superpowers anymore. And that's what the talk is all about.
I think that the geopolitical landscape today unnerves people because there's so much conflict, there's so much instability. People see that the trajectory of US-China relations, of war in Europe, of the state of democracy and globalization, all is heading in ways that seem both negative and unsustainable. And part of the reason for that is because it is not geopolitics as usual. It's not the Soviets or the Americans or the Chinese that are driving outcomes in the geopolitical space. Rather it is breaking up into different global orders depending on the type of power we're talking about.
There's a security order of course, and people that think that international institutions and governance doesn't work anymore, aren't focusing on hard security cause NATO is expanding, and getting stronger, and involving not just the Nordics, but also the Japanese, and the South Koreans, and the Australians. The Americans are building out the Quad and they're building out AUKUS, in part because of growing consensus on Russia among the advanced industrial democracies, growing concerns about China. But then also you have at the same time that the US-led national security institutions are getting stronger, the global economic architecture is fragmenting and it's becoming more competitive. And the Europeans are driving some rules, and the Chinese are driving others, and the Americans are driving others. No one's really happy about that, and it's becoming less efficient, and that's because it's a multilateral economic order at the same time as it's a unilateral unipolar security order.
And those are two things that we kind of feel right now, and it's not super comfortable. It's not super stable. The pieces move and they rub up against each other. The Americans trying to have more dominance in certain areas of the economy. When you can make it about national security, like if you talk about critical minerals and transition energy economies or semiconductors, for example, you see all that investment moving away from Taiwan and towards the US, the Netherlands, Japan, other countries. And you can see other areas where the Chinese have more influence in commercial ties and are getting more diplomacy oriented towards them in the Global South, for example, in the BRICS, and now France saying they want to go to BRICS meeting and that's not about national security, that's about economic integration. So these things, they're like tectonic plates and they don't align comfortably. And when they don't and when they move, sometimes you get an earthquake, sometimes you get a tsunami.
But then you have a global digital order. And the digital order, at least today, has no global institutions, has no real domestic regulatory structure and it's dominated by a small number of individuals that run tech companies. It's Meta, and it's Google, and it's Microsoft, and it's Elon and Twitter, and you know, it's individuals and tech companies. And these companies right now are devoting almost all of their time, almost all of their money, almost all of their labor towards getting there first, wherever there is, making sure that they're not going to be made bankrupt or undermined or creatively destructed, if you will, by their competitors, whether that's in China or whether that's, you know, sort of just a few miles down the road in the Valley or someplace else. And because that's the entire focus, or virtually the entire focus, and because the governments are behind and there's no international architecture, it means that at least for the next few years, the digital order is gonna be dominated by technology companies, and the geopolitics of the digital order will be dominated by the decision making of a very small number of individuals. And understanding that I think is the most important and most uncertain outcome geopolitically.
I'll tell you that if I could wave a magic wand, the one thing that I would want to have happen is I want these AI algorithms to not be distributed to young people, to children. If there's one thing I could do right now across the world, just snap my fingers, wave a wand and that regulation would be in place. Because, you know, when I was a kid, and we were all kids, right, except for the kids that are watching this, it was, you know, how you grew up was about nature and nurture. That's who you were. Emotionally, it's who you were intellectually, it's how you thought about the world. It's how your parents raised you, how your family raised you, your community raised you, and also your genetics. But increasingly today it's about algorithms. It's about how you interact with people through your digital interface that's becoming increasingly immersive. And the fact that that is being driven by algorithms that are being tested on people real time. I mean, you don't test vaccines on people real time even in a pandemic until you've actually gotten approvals and done proper testing. You don't test GMO food on people until you've done testing. And yet you test algorithms on people and children real time. And the testing that you're doing is AB testing to see which is more addictive, you know, which actually you can more effectively productize, how you can make more money, how you can get more attention, more eyeballs, more data from people. And I think particularly with young people whose, you know, minds are going to be so affected by the way they are steered, by the way they are raised, and by the way they are raised by these algorithms, we've gotta stop that.
I think the Chinese actually understand that better than the West does. And you know, it's interesting, you go to Washington, you say, "What do you think we can learn from the Chinese?" Not a question that they get asked very often. It's a useful one since they're the second largest economy and they're growing really fast. I would say when they decided that they were going to put caps on video games for kids, that was one that I remember, everyone I knew who was a parent of a teenager said, "I wouldn't mind that happening in the United States." Something like that on new algorithms, social media and AI for young people, I would get completely behind. And I hope that's something we can do.
But there are a lot of issues here, huge opportunities that come from AI, massive amount of productivity gains in healthcare, in longevity, in agriculture, in new energy development, in every aspect of science, and we'll get there because there's huge amounts of money, and sweat equity, and talent that is oriented towards doing nothing but that. But the disruptive negative implications of testing those things on 8 billion people on the planet, or anyone I should say, who's, you know, connected to a smartphone or to a computer, so more than 50% of the planet, that is not something we're taking care of and we're gonna pay the cost of that.
So anyway, you have just heard some of my TED Talk and what I think the implications are, I hope you'll check out the whole thing and I look forward to talking to you all real soon.
TikTok "boom"! Could the US ban the app?
As a person over 40, the first thing I did when I heard about a new bipartisan US bill that could lead to a ban of TikTok was: call my niece Valeria in Miami.
She’s a high school sophomore who spends a lot of time on TikTok.
“People are hypnotized by it,” she told me, estimating she spends up to two hours a day on the app, even when she deliberately erases it from her phone during school hours. And during the dog days of summer, she says, some of her friends will tap in for more than eight hours daily.
On this particular day, the two top vids in her feed were: a physics teacher driving a homemade rocket-powered scooter through his classroom to the soundtrack of rapper Ace Hood’s hit record “I woke up in a Bugatti,” and a mesmerizing vid of a woman applying fine lines of wax to a Pysanka egg.
This is the sort of algorithmic catnip that has won TikTok more than 100 million users in the US alone.
But the new bill moving through Congress could end all of that. The RESTRICT Act would expand the president’s power to ban apps or hardware made by companies based in countries that Washington considers “adversaries.” While the bill doesn’t mention any companies by name, Chinese-owned TikTok is widely understood to be its fattest target.
US lawmakers have already banned TikTok on government devices – but the new bill would permit the president to scrap the app from everyone else’s phones too.
Why do people want to ban TikTok? Supporters of a ban say the app, which records loads of personal data about its users, is a national security risk. After all, what’s to stop the Chinese government from demanding TikTok hand over all that data on ordinary Americans’ locations, obsessions, and contacts? (Answer: nothing – it’s a one-party state.)
What’s more, the platform’s vast reach has raised concerns that Beijing – or its friends – could use TikTok for propaganda or influence operations meant to mess with American politics.
My niece, for her part, says that while the political and privacy issues don’t come up much among her friends, she does worry about this problem of disinformation. “TikTok has a false sense of credibility. If you wanted to get a large number of people to believe something that was not true,” she says, “TikTok could be useful for that.”
TikTok says it recognizes the concerns, but points out that it has already negotiated solutions to these issues with the US. Those reportedly include creating a special US-based oversight board for its content and transferring US users’ data to servers run by American companies.
Opponents of a ban have strong arguments too.
For one thing, a big legal fight could await.
“There are important First Amendment concerns,” says Anupam Chander, a scholar of international tech regulation at Georgetown Law School. “TikTok is an enormous speech platform, one that millions of Americans depend on on a daily basis.”
Supporters of a ban disagree, arguing that a ban on the company isn’t a ban on speech itself. But the issue would almost certainly wind up before the courts before long.
At the same time, banning TikTok could provoke a backlash at home. While polls show a majority of Americans support a ban, Democrats are far less keen than Republicans, and younger voters – TikTok’s primary users – are evenly split over the issue.
There’s a global angle too. Banning TikTok or forcing it to house its data in the US could set a precedent that comes back to hurt global American firms, according to Caitlin Chin, a tech regulation expert at CSIS.
“The U.S. economy depends on cross-border data flows,” she says. “If the United States starts banning companies based on their corporate ownership or their country of origin, this could encourage other countries to do the same.”
But perhaps the biggest issue, says Chin, is that banning TikTok wouldn’t really address the specific concerns that TikTok’s critics have raised.
Thousands of American companies already sell data to brokers who can pass it on to hostile governments, she points out. And as we’ve seen, US-based social media platforms are hardly immune to spreading disinformation themselves. For Chin, the problem is more basic.
“The US has very outdated and fragmented regulations when it comes to both data privacy and content moderation. Banning TikTok is not actually going to solve our data privacy or surveillance or propaganda problems.”
As for Valeria and her friends, banning TikTok might not be the worst thing, she says.