The debate around the US banning TikTok is a proxy for a larger question: How safe are democracies from high-tech threats, especially from places like China and Russia?
There are genuine concerns about the integrity of elections. What are the threats out there, and what can be done about them? No one understands this issue better than Chris Krebs. Krebs is best known as the former director of the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).
In a high-profile showdown, Donald Trump fired Krebs in November 2020, after CISA publicly affirmed that the election was among the “most secure in history” and that the allegations of election corruption were flat-out wrong. Since then, Krebs has become the chief public policy officer at SentinelOne and cochairs the Aspen Institute’s US Cybersecurity Working Group, and he remains at the forefront of the cyber threat world.
GZERO Publisher Evan Solomon spoke to him this week about what we should expect in this volatile election year.
Solomon: How would you compare the cyber threat landscape now to the election four years ago? Have the rapid advances in AI made a material difference?
Chris Krebs: The general threat environment related to elections tracks against the broader cyber threat environment. The difference here is that beyond just pure technical attacks on election systems, election infrastructure, and the campaigns themselves, we have a parallel threat of information operations and influence operations, what we more broadly call disinformation.
This has picked up almost exponentially since 2016, when the Russians, as detailed in the Intelligence Community Assessment of January 2017, showed that you can get into the middle of domestic elections and pour kerosene on that conversation. That means the conflict jumps into the real world, potentially even culminating in political violence like we saw on Jan. 6.
We saw the Iranians follow that lead in 2020. The intelligence community released another report in December that detailed how the Chinese attempted to influence the 2022 elections. The Russians are active too, through a group we track called Doppelganger, which is specifically targeting the debate around the border and immigration in the US.
Solomon: When you say Doppelganger is “active,” what exactly does that mean in real terms?
Krebs: They use synthetic personas, or take over existing personas that have some element of credibility, and jump into the online discourse. They also use pink slime websites, which are basically fake media outlets, whose stories then get picked up through social media and move over to traditional media. They are taking existing divides and amplifying the discontent.
Solomon: Does it have a material impact on, say, election results?
Krebs: I was at an event back in 2019, and a former governor came up to me as we were talking about prepping for the 2020 election and said: “Hey, everything you just talked about sounds like opposition research, typical electioneering, and hijinks.”
And you know what? That's not totally wrong. But there is a difference.
Rather than just being normal domestic politics, now we have a foreign security service inserting itself and driving discourse domestically. And that's where the intelligence services here in the US, as well as our allies in the West, have tools to go in and disrupt.
They can get onto foreign networks and say, “Hey, I know that account right there. I am able to determine that the account which is pushing this narrative is controlled by the Russian security services, and we can do something with that.”
But here is the key: Once you have a social media influencer here in the US that picks up that narrative and runs with it, well, now, it's effectively fair game. It's part of the conversation, First Amendment protected.
Solomon: Let's move to the other side. What do you do about it without violating citizens' privacy and free-speech civil liberties?
Krebs: This is really the political question of the day. In fact, just last week there was a Supreme Court hearing on Murthy v. Missouri that gets to this question of government and platforms working together. (Editor’s note: The case hinges on whether the government’s efforts to combat misinformation online around elections and COVID constitute a form of censorship.) Based on my read, the Supreme Court was largely dismissive of Missouri and Louisiana's arguments in that case. But we'll see what happens.
I think the bigger issue is that there is this broader conflict, particularly with China, and it is a hot cyber war. Cyber war, in their military doctrine, has a technical leg and a psychological leg. And as we see it, there are a number of different approaches.
For example, India has outlawed and banned hundreds of Chinese-origin apps, including WeChat, TikTok, and a few others. The US has been much more discreet in combating Chinese technology. The recent actions in the US Congress, led by the House of Representatives, are much more focused on taking the foreign-control piece out of the conversation and requiring divestitures.
Solomon: Chris, what’s the biggest cyber threat to the elections?
Krebs: Based on my conversations with law enforcement and the national security community, the number one request that they're getting from election officials isn't on the cyber side. It isn't on the disinformation side. It's on physical threats to election workers. We're talking about doxing, we're talking about swatting, we're talking about people physically intimidating at the polls and at offices. And this is resulting in election officials resigning and quitting and not showing up.
How do we protect those real American heroes who are making sure that we get to follow through on our civic duty of voting and elections? If those election workers aren't there, it's going to be a lot harder for you and me to get out there and vote.
Solomon: What is your biggest concern about AI technology galloping ahead of regulations?
Krebs: Here in the United States, I'm not too worried about regulation getting in front of AI. When you look at the recent AI executive order out of the Biden administration, it's about transparency, and even the threshold they set for compute power and operations is about four times higher than that of the most advanced publicly available generative AI. And even if you cross that threshold, the most you have to do is tell the government that you're building or training that model and show safety and red-teaming results, which hardly seems onerous to me.
The Europeans are taking a different approach, more of a regulate-first, ask-questions-later posture, which I think is going to limit some of their ability to truly be at the bleeding edge of AI.
But I'll tell you this: We are using AI in cybersecurity to much greater effect and impact than the bad guys right now. The best they can do at the moment is use it for social engineering, for writing better phishing emails, for some research, and for functionality. We are not seeing credible reports of AI being used to write new, innovative malware. But in the meantime, we are giving AI-powered tools to the threat hunters that have really advanced capabilities to go find bad stuff, to improve configurations, and ultimately to take the security operations piece and supercharge it.