Podcast: Can governments protect us from dangerous software bugs?
Listen: We've probably all felt the slight annoyance at prompts we receive to update our devices. But these updates deliver vital patches to our software, protecting us from bad actors. Governments around the world are increasingly interested in monitoring when dangerous bugs are discovered as a means to protect citizens. But would such regulation have the intended effect?
In season 2, episode 5 of Patching the System, we focus on the international system of bringing peace and security online. In this episode, we look at how software vulnerabilities are discovered and reported, what government regulators can and can't do, and the strength of a coordinated disclosure process, among other solutions.
Our participants are:
- Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro
- Serge Droz from the Forum of Incident Response and Security Teams (FIRST)
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Can governments protect us from dangerous software bugs?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
DUSTIN CHILDS: The industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
SERGE DROZ: I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable.
ALI WYNE: If you've ever gotten a notification pop up on your phone or computer saying that an update is urgently needed, you've probably felt that twinge of inconvenience at having to wait for a download or restart your device. But what you might not always think about is that these software updates can also deliver patches to your system, a process that is in fact where this podcast series gets its name.
Today, we'll talk about vulnerabilities that we all face in a world of increasing interconnectedness.
Welcome to Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
And about those vulnerabilities that I mentioned before, we're talking specifically about the vulnerabilities in the wide range of IT products that we use, which can be entry points for malicious actors. And governments around the world are increasingly interested in knowing about these software vulnerabilities when they're discovered.
Since 2021, for example, China has required that anytime such software vulnerabilities are discovered, they first be reported to a government ministry, even before the company that makes the technology is alerted to the issue. In the European Union, similar but less stringent legislation is pending that would require companies that discover that a software vulnerability has been exploited to report the information to government agencies within 24 hours, and also to provide information on any mitigation measures used to correct the issue.
These policy trends have raised concerns from technology companies and incident responders that such policies could actually undermine security.
Joining us today to delve into these trends and explain why are Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan, and Serge Droz from the Forum of Incident Response and Security Teams, aka FIRST, a community of IT security teams that respond when there's a major cyber crisis. Dustin, Serge, welcome to you both.
DUSTIN CHILDS: Hello. Thanks for having me.
SERGE DROZ: Hi. Thanks for having me.
ALI WYNE: It's great to be talking with both of you today. Dustin, let me kick off the conversation with you. And I tried in my introductory remarks to give listeners a quick glimpse as to what it is that we're talking about here, but give us some more detail. What exactly do we mean by vulnerabilities in this context and where did they originate?
DUSTIN CHILDS: Well, vulnerability, really when you break it down, it's a flaw in software that could allow a threat actor to potentially compromise a target, and that's a fancy way of saying it's a bug. They originate in humans because humans are imperfect and they make imperfect code, so there's no software in the world that is completely bug free, at least none that we've been able to generate so far. So every product, every program given enough time and resources can be compromised because they all have bugs, they all have vulnerabilities in them. Now, vulnerability doesn't necessarily mean that it can be exploited, but a vulnerability is something within a piece of software that potentially can be exploited by a threat actor, a bad guy.
ALI WYNE: And Serge, when we're talking about the stakes here, obviously vulnerabilities can create cracks in the foundation that lead to cybersecurity incidents or attacks. What does it take for a software vulnerability to become weaponized?
SERGE DROZ: Well, that really depends on the particular vulnerability. A couple of years ago, there was a vulnerability that was really super easy to exploit: log4j. It was something that everybody could do in an afternoon, and that of course is a really big risk. If something like that gets public before it's fixed, we really have a big problem. Other vulnerabilities are much harder to exploit, also because software vendors, in particular operating system vendors, have invested a great deal in making it hard to exploit vulnerabilities on their systems. The easy ones are getting rarer, mostly because operating system companies are building countermeasures that make them hard to exploit. Others are a lot harder and need specialists, and that's why they fetch such a high price. So there is no general answer, but the trend is it's getting harder, which is a good thing.
ALI WYNE: And Dustin, let me come back to you then. So who might discover these vulnerabilities first and what kinds of phenomena make them more likely to become a major security risk? And give us a sense of the timeline between when a vulnerability is discovered and when a so-called bad actor can actually start exploiting it in a serious way.
DUSTIN CHILDS: The people who are discovering these are across the board. They're everyone from lone researchers just looking at things to nation states, really reverse engineering programs for their own purposes. So a lot of different people are looking at bugs, and it could be you just stumble across it too and it's like, "Oh, hey. Look, it's a bug. I should report this."
So there's a lot of different people who are finding bugs. Not all of them are monetizing their research. Some people just report it. Some people will find a bug and want to get paid in one way or another, and that's what I do, is I help them with that.
But then once it gets reported, depending on what industry you're in, it usually takes 120 days up to a year until it gets fixed by the vendor. But if a threat actor finds it and it can be weaponized, they can do that within 48 hours. So even if a patch is available and that patch is well-known, the bad guys can take that patch, reverse engineer it, and turn it into an exploit within 48 hours and start spreading it. So within 30 days of a patch being made available, widespread exploitation is not uncommon if a bug can be exploited.
ALI WYNE: Wow. So 48 hours, that doesn't give folks much time to respond, but thank you, Dustin, for giving us that number. I think we now have at least some sense of the problem, the scale of the problem, and we'll talk about prevention and solutions in a bit. But first, Serge, I want to come back to you. I want to go into some more detail about the reporting process. What are the best practices in terms of reporting these vulnerabilities that we've been discussing today? I mean, suppose if I were to discover a software vulnerability for example, what should I do?
SERGE DROZ: This is a really good question, and there's still a lot of ongoing debate, even though the principles are actually quite clear. If you find a vulnerability, your first step should be to confidentially inform the vendor, whoever is responsible for the software product.
But that actually sounds easier than it is, because quite often it's hard to talk to a vendor. There are still some companies out there that don't talk to ‘hackers,’ in inverted commas. That's really bad practice. In this case, I recommend that you contact a national agency that you trust that can mediate between you. That's all fairly easy to do if it's just between you and another party, but then you have a lot of vulnerabilities in products for which no one is really responsible: take open source, or products that are used inside many other products.
So we're talking about supply chain issues, and then things really become messy. In these cases, I really recommend that people start working together with someone who's experienced in doing coordinated vulnerability disclosure. Quite often what happens is that within the affected industry, organizations get together and form a working group that silently starts mitigating this. Best practice is that you give the vendor three months or more to actually be able to fix a bug, because sometimes it's not that easy. What you really should not be doing is leaking any kind of information. Even saying, "Hey, I have found a vulnerability in product X," may trigger someone to start looking at it. So it's really important that this remains a confidential process where very few people are involved.
ALI WYNE: So one popular method of uncovering these vulnerabilities that we've been discussing, it involves, so-called bug bounty programs. What are bug bounty programs? Are they a good tool for catching and reporting these vulnerabilities, and then moving beyond bug bounty programs, are there other tools that work when it comes to reporting vulnerabilities?
SERGE DROZ: Bug bounty programs are just one of the tools we have in our tool chest to actually find vulnerabilities. The idea behind a bounty program is that you have a lot of researchers who poke at code just because they may be interested, and as the company or producer of the software, you offer them a bounty, some money. If they report a vulnerability responsibly, you pay them, usually depending on how severe or how dangerous the vulnerability is, and you encourage good behavior this way. I think it's a really great way because it actually creates a lot of diversity. Typically, bug bounty programs attract a lot of different types of researchers, so you have different ways of looking at your code, and that often uncovers vulnerabilities that no one has ever thought of, because no one really thought that way. So I think it's a really good thing.
It also rewards people who responsibly disclose and don't just sell to the highest bidder, because we do have companies out there that buy vulnerabilities that then end up in some strange gray market, exactly what we don't want. So I think that's a really good thing. Bug bounty programs are complementary to what we call penetration testing, where you hire a company that, for money, starts looking at your software. There's no guarantee that they find a bug, but they usually have a systematic way of going over it, and you have an agreement. As I said, I don't think there's a single silver bullet, a single way to do this, but I think this is a great way to reward responsible disclosure. And some of the bug bounty researchers make a lot of money. They actually make a living off it. If you're really good, you can make a decent amount of money.
DUSTIN CHILDS: Yeah, and let me just add on to that as someone who runs a bug bounty program. There are a couple of different types of bug bounty programs too, and the most common one is the vendor specific one. So Microsoft buys Microsoft bugs, Apple buys Apple bugs, Google buys Google bugs. Then there's the ones that are like us. We're vendor-agnostic. We buy Microsoft and Apple and Google and Dell and everything else pretty much in between.
And one of the biggest things that we do as a vendor-agnostic program is an individual researcher might not have a lot of sway when they contact a big vendor like a Microsoft or a Google, but if they come through a program like ours or other vendor-agnostic programs out there, they know that they have the weight of the Zero Day Initiative or that program behind it, so when the vendor receives that report, they know it's already been vetted by a program and it's already been looked at. So it's a little bit like giving them a big brother that they can take to the schoolyard and say, "Show me where the software hurt you," and then we can help step in for that.
ALI WYNE: And Dustin, you've told us what bug bounty programs are. Why would someone want to participate in that program?
DUSTIN CHILDS: Well, researchers have a lot of different motivations, whether it's curiosity or just trying to get stuff fixed, but it turns out money is a very big motivator pretty much across the spectrum. We all have bills to pay, and a bug bounty program is a way to get something fixed and earn potentially a large amount of money depending on the type of bug that you have. The bugs I deal with range anywhere between $150 on the very low end, up to $15 million for the most severe zero click iPhone exploits being purchased by government type of thing, so there's all points in between too. So it's potentially lucrative if you find the right types of bugs, and we do have people who are exclusively bug hunters throughout the year and they make a pretty good living at it.
ALI WYNE: Duly noted. So maybe I'm playing a little bit of a devil's advocate here, but if vulnerabilities, these cyber vulnerabilities, if they usually arise from errors in code or other technology mistakes from companies, aren't they principally a matter of industry responsibility? And wouldn't the best prevention just be to regulate software development more tightly and avoid these mistakes from getting out into the world in the first place?
DUSTIN CHILDS: Oh, you used the R word. Regulation, that's a big word in this industry. So obviously it's less expensive to fix bugs in software before it ships than after it ships. So yes, obviously it's better to fix these bugs before they reach the public. However, that's not really realistic because, like I said, every piece of software has bugs, and you could spend a lifetime testing and testing and testing and never root them all out, and then never ship a product. So the industry right now is definitely looking to ship product. Can they do a better job? I certainly think they can. I spend a lot of money buying bugs, and some of them I'm like, "Ooh, that's a silly bug that should never have shipped." So absolutely, the industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
ALI WYNE: Obviously there isn't any silver bullet when it comes to managing these vulnerabilities, disclosing these vulnerabilities. So assuming that we probably can't eliminate all of them, how should organizations deal with fixing these issues when they're discovered? And is there some kind of coordinated vulnerability disclosure process that organizations should follow?
DUSTIN CHILDS: There is a coordinated disclosure process. I mean, I've been in this industry for 25 years and dealing with vulnerability disclosures since 2008 personally, so this is a well-known process. As an industry, if you're developing software, one of the most important things you can do is make sure you have a contact. If someone finds a bug in your program, who do they email? For the more established programs like Microsoft and Apple and Google, it's very clear, if you find a bug there, who you're supposed to email and what you're supposed to do. One of the problems we have as a bug bounty program is that if we purchase a bug in a lesser-known piece of software, sometimes it's hard for us to hunt down who actually is responsible for maintaining it and updating it.
We've even had to go on to Twitter and LinkedIn to try and hunt down some people to respond to an email saying, "Hey, we've got a bug in your program." So that's one of the biggest things you can do: just be aware that somebody could report a bug to you. And as a consumer of the product, you need a patch management program. You can't just rely on automatic updates. You can't just rely on things happening automatically or easily. You need to understand first what is in your environment, so you have to be ruthless in your asset discovery, and I do use the word ruthless there intentionally. You've got to know what is in your enterprise to be able to defend it, and then you've got to have a plan for managing it and patching it. That's a lot easier said than done, especially in a modern enterprise where not only do you have desktops and laptops, you've got IT devices, you've got IoT devices, you've got thermostats, you've got little screens everywhere that need updating, and they all have to be included in that patch management process.
ALI WYNE: Serge, when it comes to triaging vulnerabilities, it doesn't sound like there's a large need for government participation. So what are some of the reasons legitimate and maybe less than legitimate why governments might increasingly want to be notified about vulnerabilities even before patches are available? What are their motivations?
SERGE DROZ: So I think there are several different motivations. Governments are getting increasingly fed up with the kinds of excuses that our industry, the software industry, makes about how hard it is to avoid software vulnerabilities, all the reasons and excuses we bring for not doing our jobs. And frankly, as Dustin said, we could be doing better. Governments just want to know so they can send out the message that, "Hey, we're watching you and we want to make sure you do your job." Personally, I'm not really convinced this is going to work. So those would be mostly the legitimate reasons why governments want to know about vulnerabilities. I think it's fair that the government knows or learns about a vulnerability after the fact, just to get an idea of what the risk is for the entire industry. Personally, I feel only the parties that need to know should know during the responsible disclosure.
And then of course, there are governments that like vulnerabilities because they can abuse them themselves. I mean, governments are known to exploit vulnerabilities through their favorite three-letter agencies. That's actually quite legitimate for governments to do. It's not illegal for governments to do this type of work, but of course, as a consumer or as an end user, I don't like this. I don't want products that have vulnerabilities that are exploited. And personally, from a civil society point of view, there's just too much risk with this being out there. So my advice really is: the fewer people, the fewer organizations know about a vulnerability, the better.
DUSTIN CHILDS: What we've been talking about a lot so far is what we call coordinated disclosure, where the researcher and the vendor coordinate a response. When you start talking about governments though, you start talking about non-disclosure, and that's when people hold onto these bugs and don't report them to the vendor at all, and the reason they do that is so that they can use them exclusively. So that is one reason why governments hold onto these bugs and want to be notified is so that they have a chance to use them against their adversaries or against their own population before anyone else can use them or even before it gets fixed.
ALI WYNE: So the Cybersecurity Tech Accord had recently released a statement opposing the kinds of reporting requirements we've been discussing. From an industry perspective, what are the concerns when it comes to reporting on vulnerabilities to governments?
DUSTIN CHILDS: Really the biggest concern is making sure that we all have an equitable chance to get it fixed before it gets used. If a single government starts using vulnerabilities to exploit for its own gain, that puts the rest of the world at a disadvantage, and that's the rest of the world, their allies as well as their opponents. So we want to do coordinated disclosure. We want to get the bugs fixed in a timely manner, and keeping them to themselves really discourages that. It discourages finding bugs, it discourages reporting bugs. It really discourages vendors from fixing bugs too, because if the vendors know that governments are just going to be using these bugs, they might get a phone call from their friendly neighborhood three-letter agency saying, "You know what? Hold off on fixing that for a while." Again, it just puts us all at risk, and we saw this with Stuxnet.
Stuxnet was a tool that was developed by governments targeting another government. It was targeting Iranian nuclear facilities, and it did do damage to Iranian nuclear facilities, but it also did a lot of collateral damage throughout Europe as well, and that's what we're trying to avoid. It's like if it's a government on government thing, great, that's what governments do, but we're trying to minimize the collateral damage from everyone else who was hurt by this, and there really were a lot of other places that were impacted negatively from the Stuxnet virus.
ALI WYNE: And Serge, what would you say to someone who might respond to the concerns that Dustin has raised by saying, "Well, my government is advanced and capable enough to handle information about vulnerabilities responsibly and securely, so there's no issue or added risk in reporting to them." What would you say to that individual?
SERGE DROZ: The point is that there are certain things you really only deal with on a need-to-know basis. That's something that governments actually do know. When governments deal with confidential or critical information, it's always on a need-to-know basis. They don't tell it to every government employee, even though they are, of course, loyal. It makes the risk of this leaking bigger, even if the government doesn't have any ill intent. So there's just no need, the same way there is no need for all the other hundred thousand security researchers to know about this. So I think as long as you cannot contribute constructively to mitigating this vulnerability, you should not be part of that process.
Having said that, though, there are some governments that have actually tried really hard to help researchers make contact with vendors. Some researchers are afraid to report vulnerabilities because they feel they're going to come under pressure or things like that. So if a government wants to take that role and can create enough trust that researchers trust them, I don't really have a problem with it, but it should not be mandatory. Trust needs to be earned. You cannot legislate this, and every time you have to legislate something, I mean, come on, you legislate it because people don't trust you.
ALI WYNE: We've spent some time talking about vulnerabilities and why they're a problem. We've discussed some effective, and maybe some not so effective, ways to prevent or manage them better. And I think governments have a legitimate interest in knowing that companies are acting responsibly, and that interest is the impetus behind at least some of the push for more regulation and reporting. But what other ways does each of you see for governments to help ensure that companies are mitigating risks and protecting consumers as much as possible?
DUSTIN CHILDS: So one of the things that we're involved with here at the Zero Day Initiative is encouraging governments to allow safe harbor. And really what that means is that researchers are safe in reporting vulnerabilities to a vendor without the legal threat of being sued or having other action taken against them. As long as they are legitimately reporting a bug, not trying to steal anything or violate laws, as long as they're legitimate researchers trying to get something fixed, they're able to do that without facing legal consequences.
One of the biggest things that we do as a bug bounty program is just handle the communications between researchers and the vendors, and that is really where it can get very contentious. So to me, one of the things that governments can do to help is make sure that safe harbor is allowed so that the researchers know that, "I can report this vulnerability to this vendor without getting in touch with a lawyer first. I'm just here trying to get something fixed. Maybe I'm trying to get paid as well," so maybe there is some monetary value in it, but really they're just trying to get something fixed, and they're not trying to extort anyone. They're not trying to create havoc, they're just trying to get a bug fixed, and that safe harbor would be very valuable for them. That's one thing we're working on with our government contacts, and I think it's a very big thing for the industry to assume as well.
SERGE DROZ: Yes, I concur with Dustin. I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable. That also includes a regulatory framework that gets away from this blaming. I mean, writing software is hard; bugs appear. If you just constantly keep bashing people that they're not doing it right, or you threaten them with liabilities, they're not going to talk to you about these types of things. So I think the job of the government is to encourage responsible behavior and to create an environment for that. And there are always going to be a couple of black sheep, and here maybe the role of the government is really to encourage them to play along and start offering vulnerability reporting programs. That's where I see the role of the government: creating good governance to actually enable responsible vulnerability disclosure.
ALI WYNE: Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan. And Serge Droz from the Forum of Incident Response and Security Teams, a community of IT security teams that respond when there is a major cyber crisis. Dustin, Serge, thanks very much for joining me today.
DUSTIN CHILDS: You're very welcome. Thank you for having me.
SERGE DROZ: Yes, same here. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. We have five episodes this season covering everything from cyber mercenaries to a cybercrime treaty. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear more. I'm Ali Wyne. Thanks very much for listening.
Podcast: Would the proposed UN Cybercrime Treaty hurt more than it helps?
Listen: As the world of cybercrime continues to expand, it follows that international legal standards should expand with it. But while many governments around the globe see a need for a cybercrime treaty to set a standard, a current proposal on the table at the United Nations is raising concerns among private companies and nonprofit organizations alike. There are fears it covers too broad a scope of crime and could fail to protect free speech and other human rights across borders, while not actually having the intended effect of combatting cybercrime.
In season 2, episode 4 of Patching the System, we focus on the international system of online peace and security. In this episode, we hear about provisions currently included in the proposed Russia-sponsored UN cybercrime treaty as deliberations continue - and why they might cause more problems than they solve.
Our participants are:
- Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord
- Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
TRANSCRIPT: Would the proposed UN Cybercrime Treaty hurt more than it helps?
NICK ASHTON-HART: We want to actually see a result that improves the situation for real citizens, that actually protects victims of real crimes, and that doesn't allow cybercrime to go unpunished. That's in no one's interest.
KATITZA RODRIGUEZ: By allowing countries to set their own standards of what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it truly serves its purpose in tackling cybercrimes without undermining human rights.
ALI WYNE: It's difficult to overstate the growing impact of international cybercrime. Many of us either have been victims of criminal activity online or know someone who has been.
Cybercrime is also big business: it's one of the top 10 risks highlighted in the World Economic Forum's 2023 Global Risk Report, and it's estimated that it could cost the world more than $10 trillion by 2025. Now, global challenges require global cooperation, but negotiations over a new UN Cybercrime Treaty have been complicated by questions around power, free speech, and privacy online.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
In this episode, we'll explore the current draft of what would be the first United Nations Cybercrime Treaty, the tense negotiations behind the scenes, and the stakes that governments and private companies have in those talks.
Last season we spoke about the UN Cybercrime Treaty negotiations when they were still relatively early on in the process. While they had been kicked off by a Russia-sponsored resolution that passed in 2019, there had been delays due to COVID-19.
In 2022, there was no working draft and member states were simply making proposals about what should be included in a cybercrime treaty, what kinds of criminal activity it should address, and what kinds of cooperation it should enable.
Here's Amy Hogan-Burney of the Microsoft Digital Crimes Unit speaking back then:
AMY HOGAN-BURNEY: There is a greater need for international cooperation because as cyber crime escalates, it’s clearly borderless and it clearly requires both public sector and the private sector to work on the problem. Although I am just not certain that I think that a new treaty will actually increase that cooperation. And I’m a little concerned that it might do more harm than good. And so, yes, we want to be able to go after cyber criminals across jurisdiction. But at the same time, we want to make sure that we’re protecting fundamental freedoms, always respectful of privacy and other things. Also, we’re always mindful of authoritarian states that may be using these negotiations to criminalize content or freedom of expression.
Now a lot has happened since then as we've moved from the abstract to the concrete. The chair of the UN Negotiating Committee released a first draft of the potential new cybercrime treaty last June, providing the first glimpse into what could be new international law and highlighting exactly what's at stake. The final draft is expected in November with the diplomatic conference to finalize the text starting in late January 2024.
Joining me are Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord and Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation. Thanks so much for speaking with me today.
KATITZA RODRIGUEZ: Thank you for inviting us.
NICK ASHTON-HART: It's a pleasure to be here.
ALI WYNE: Let's dive right into the cybercrime treaty. Now, this process started as a UN resolution sponsored by Russia and it was met early on by a lot of opposition from Western democracies, but there were also a lot of member states who genuinely thought that it was necessary to address cybercrime. So give us the broad strokes as to why we might want a cybercrime treaty?
NICK ASHTON-HART: The continuous expansion of cybercrime at an explosive growth rate is clearly a problem and one that the private sector would like to see more effectively addressed because of course, we're on the front lines of addressing it as victims of it. At one level it sounds like an obvious candidate for international action.
In reality, of course, there is the Budapest Convention on cybercrime, which was agreed in 2001. It is not just a convention that European countries can join, any member state can join. If there hadn't been any international convention, then you could see how it would be an obvious thing to work on.
This was controversial from the beginning because there is one and it's widely implemented. I think it's 68 countries, but 120 countries' laws have actually been impacted by the convention. There was also a question because of who was asking for it. This also raised more questions than answers.
KATITZA RODRIGUEZ: For us, civil society, I don't think the treaty is necessary because there are other international treaties, but I do understand why some states are trying to push for this treaty because they feel that their system for law enforcement cooperation is just too slow or not reliable. And they have argued that they have not been able to set up effective mutual legal assistance treaties, but we think the reasons fall short, especially because there are lot of these existing mechanisms include solid human rights safeguards, and when the existing mutual legal assistance treaty for international cooperation does not work well, we believe they can be improved and fixed.
And just let's be real, there are some times when not cooperating is actually the right thing to do, especially when criminal investigations could lead to prosecution of individuals for their political belief, their sexual or protection, gender identity or simply for speaking out of protesting peacefully or insulting the president or the king.
On top of that, this treaty as is stand now, might not even make the cybercrime cooperation process any faster. The negotiators are aiming for mandatory cooperation of almost all crimes on this planet and not just cybercrimes. This could end up bogging down the system even more.
ALI WYNE: Nick, let me just ask you, are there any specific aspects of a new global cybercrime treaty that you think could be genuinely helpful to citizens around the world?
NICK ASHTON-HART: Well, if for one it focused only on cybercrime, that would be the most fundamental issue. The current trajectory would have this convention address all crimes of any kind, which is clearly an ocean boiling exercise and creates many more problems than it solves. There are many developing countries who will say, as Katitza has noted, that they don't receive timely law enforcement cooperation through the present system because if you are not a part of the Budapest Convention, honestly you have to have a bilateral treaty relationship with every country that you want to have law enforcement cooperation with.
And clearly, every country negotiating a mutual legal assistance treaty with 193 others is not a recipe for an international system that's actually effective. That's where an instrument like this can come in and set a basic common set of standards so that all parties feel confident that the convention’s provisions will not be taken advantage of for unacceptable purposes.
ALI WYNE: Katitza, I want to bring you back into the conversation. On balance, what do you think of the draft of the treaty as it stands now as we approach the end of 2023?
KATITZA RODRIGUEZ: Honestly, I'm pretty worried. The last negotiation session in New York made it crystal clear that we're short of time and there is still a lot left undecided, especially on critical issues like defining the treaty scope and ensuring human rights are protected.
The treaty was supposed to tackle cybercrime, but it's morphing into something much broader, a general purpose surveillance tool that could apply to any crime, tech involvement or not, as long as there is digital evidence. We're extremely far from our original goal and opening a can of worms. I agree with Nick when he said that a treaty with a tight focus on just actual cybercrimes topped with solid human right protections could really make a difference. But sadly what we are seeing right now is very far from that.
Many countries are pushing for sweeping surveillance powers, hoping to access real-time location data and communication for a wide array of crimes with minimum legal safeguards, the check and balance to put limits to curb potential abuse of power. This is a big red flag for us.
On the international cooperation front, it's a bit of a free for all the treaty leaves it up to individual countries to set their own standards for privacy and human rights when using these surveillance powers in cross border investigations.
And we know that the standards of some countries are very far from minimal standards, yet every country that signs a treaty is expected to implement these cross-border cooperation powers. And here's where it gets really tricky. This sets a precedent for international cooperation on investigations, even into activities that might be considered criminal in one country but are actually forms of free expression. This includes laws against so-called fake news, peaceful protests, blasphemy, or expressing non-conforming sexual orientation or gender identity. These are matters of human rights.
ALI WYNE: Nick, from your perspective, what are the biggest concerns for industry right now with the text, with the negotiations as they're ongoing? What are the biggest concerns for industry and is there any major provision that you think is missing right now from the current text?
NICK ASHTON-HART: Firstly, I will say that industry actually agrees with everything you just heard from EFF. And that's one of the most striking things about this negotiation, is in more than 25 years of working in multilateral policy, I have never seen all NGOs saying the same thing to the extent that is the case in this negotiation. Across the board, we have the same concerns. We may emphasize some more than others or put a different level of emphasis on certain things, but we all agree comprehensively, I think, about the problems.
One thing that's very striking is this is a convention which is fundamentally about the sharing of personal information about real people between countries. There is no transparency at all at any point. In fact, the convention repeatedly says that all of these transfers of information should be kept secret.
This is the reality that they are talking about agreeing to, is a convention where countries globally share the personal information of citizens with no transparency at all. Ask yourself if that is a situation which isn't likely to be abused, because I think we know the answer. It's the old joke about you know who somebody is if you put them in a room and turn the lights off. Well, the lights are off and the light switch doesn't exist in this treaty.
And so that, to us, is simply invidious in 2024 that you would see that bearing the UN logo - it would be outrageous. And that's just the starting place. There's also provisions that would allow one country to ask another to seize the person of say a tech worker who is on holiday, or a government worker who is traveling that has access to passwords of secure systems, to seize that person and demand that that person turn over those codes with no reference back to their employer.
As Katitza has said, it also allows for countries to ask others to provide the location data and communication metadata about where a person is in real time along with real time access to their computer information. This is clearly subject to abuse, and we brought this up with some delegations and they said, "Well, but countries do this already, so do we have to worry about it?"
I just found that an astonishing level of cynicism: the fact that people abuse international law isn't an argument for trying to limit their ability to do it in this context. We have a fundamental disconnect where we're asking to trust all countries in the world to operate in the dark, in secret, forever and that that will work out well for human rights.
ALI WYNE: Katitza, let me bring you back into the conversation. You heard Nick's assessment. I'd like to ask you to react to that assessment and also to follow up with you, do you think that there are any critical provisions that need to be added to the current text of the draft treaty?
KATITZA RODRIGUEZ: Well, I agree on many of the points that Nick made. One, keeping a sharp focus on so-called cybercrimes, is not only crucial for protecting human rights, our point of view, but it's also key to making this whole cooperation work. We have got countries left and right pointing out the flaws in the current international cooperation mechanisms, saying they are too flawed, too complex. And yet here we are heading towards a treaty that could cover a limitless list of crimes. That's not just missing the point, it's setting us up for even more complexity when the goal should be working better together, easier to tackle this very serious crimes like ransomware attacks that we have seen everywhere lately.
There is a few things that are also very problematic that are more into the details. One is one that Nick mentioned, this provision that could be used to coerce individual engineers, people who have knowledge to be able to access systems, to compel them to bypass their own security measures or the measures of their own employees, without the company actually knowing and putting the engineer into trouble because it won't be able to tell their employer that they are working on behalf of the law enforcement. I think it's really Draconian, these provisions, and it's also very bad for security, for encryption, for keeping us more safe.
But there's another provision that is also very problematic for us. It's the one that on international cooperation too, when it mentions that states should share, "Items or data required for analysis of investigations." The way it's phrased, it is very vague and leaves room for a state's ability to share entire databases or artificial intelligence trainings data to be shared. This could include biometrics data, data that is very sensitive and it's a human rights minefield here. We have seen how biometric data, face and voice recognition can be used against protestors, minorities, journalists, and migrants in certain countries. This treaty shouldn't become a tool that facilitates such abuses on an international scale.
And we also know that Interpol, in the mix too, is developing this massive predictive analytic system fed by all sorts of data, but it will be also with information data provided by member states. The issue with predictive policing is that it's often pitched as unbiased since it's based on data and not personal data, but we know that's far from the truth. It's bound to disproportionately affect Black and other over-policed communities. The data feeds into these systems comes from a racially biased criminal punishment systems and arrests in Black neighborhoods are disproportionately high. Even without explicit racial information, the data is tainted.
One other one:Human rights safeguards in the treaty as Nick says, they're in secret and the negotiation, no transparency, we fully agreed on that, but they are very weak.
As it stands, the main human rights safeguards in the treaty don't even apply to the international co-operation chapter, which is a huge gap. It defers to national law, whatever national law says, and as I said before, for one country this is good and for others it's bad and that's really problematic.
ALI WYNE: Nick, in terms of the private sector and in terms of technology companies, what are the practical concerns when it comes to potential misuses or abuses of the treaty from the perspective specifically of the Cybersecurity Tech Accord?
NICK ASHTON-HART: In the list of criminal acts in the convention, at the present time, none of them actually require criminal intent, but that is not actually the case at the moment. The criminal acts are only defined as "Acts done intentionally without right." This opens the door for all kinds of abuses. For example, security researchers often attempt to break into systems in order to find defects that they can then notify the vendors of, so these can be fixed. This is a fundamentally important activity for the security of all systems globally. They are intentionally breaking into the system but not for a negative purpose, for an entirely positive one.
But the convention does not recognize how important it is not to criminalize security researchers. The Budapest Convention, by contrast, actually does this. It has very extensive notes on the implementation of the convention, which are a part of the ratification process, meaning countries should not only implement the exact text of the convention, but they should do so in a rule of law-based environment that does, among other things, protect security researchers.
We have consistently said to the member states, "You need to make clear that criminal intent is the standard." The irony here is this is actually not complicated because this is a fundamental concept of criminal law called mens rea, which says that with the exception of certain crimes like murder, for someone to be convicted, you have to find that they had criminal intent.
Without that, you have the security researchers’ problem. You also have the issue that whistleblowers are routinely providing information that they're providing without authorization, for example, to journalists or also to watchdog agencies of government. Those people would also fall foul of the convention as its currently written, as would journalists' sources, depending on the legal environment in which they're implemented. Like civil society, we have consistently pointed out these glaring omissions and yet no country including the developed Western countries that you would expect would seize upon this, none of them have moved to include protections for any of these situations.
I have to say that's one of the most disappointing things about this negotiation is so far most of the Western democracies are not acting to prevent abuses of this convention and they are resisting any efforts from all of us in civil society and the private sector urging them to take action and they're refusing to do so. There are two notable exceptions which is New Zealand and Canada, but the rest, frankly, are not very helpful.
Some of the other issues that we have is that it should be much clearer that if there's a conflict of law problem where a country asks for cooperation of a provider and the provider says to them, "Look, if we provide this information to you, it's coming from another jurisdiction and it would cause us to break the law in that jurisdiction." We have repeatedly said to the member states, "You need to provide for this situation because it happens routinely today and in such an instance it's up to the two cooperating states to work out between themselves how that data can be provided in a way that does not require the provider to break the law."
If you want to see more effective cooperation and more expeditious cooperation, you would want more safeguards, as Katitza has mentioned. There's a direct connection between how quickly cooperation requests go through and the level of safeguards and comfort with the legal system of the requesting and requested states.
Where a request goes through quickly, it's because the states both see that their legal systems are broadly compatible in terms of rights and the treatment of accused persons and appeals and the like. And so they not only see that the crimes are the same, called dual criminality, but that also the accused will be treated in a way that's broadly compatible with the home jurisdiction. And so there's a natural logic to saying, "Since we know this is the case, we should provide for this in here and ensure robust safeguards because that will produce the cooperation that everyone wants." Unfortunately, the opposite is the case. The cooperation elements continue to be weakened by poor safeguards.
ALI WYNE: I think that both of you have made clear that the stakes are very high for whether this treaty comes to pass, what will the final text be? What will the final provisions be? But just to put a fine point on it, are there concerns that this treaty could also set a precedent for future cybercrime legislation across jurisdictions? I can imagine this treaty serving as a north star in places that don't already have cybercrime laws in place, so Katitza, let me begin with you.
KATITZA RODRIGUEZ: Yes, your are concerns and indeed very valid and very pressing. By setting a precedent where broad intrusive surveillance tools are made available for an extensive range of crimes, we risk normalizing a global landscape where human rights are secondary to state surveillance and control. Law enforcement needs ensured access to data, but the check and balances and the safeguards is to ensure that we can differentiate between the good cops and the bad cops. The treaty provides a framework that could empower states to use the guise of cybercrime prevention to clamp down on activities that are protected under human right law.
And I think that this broad approach not only diverts valuable resources and attention away for tackling genuine cybercrimes, but also offers – and here to answer your question - an example for future legislation that could facilitate this repressive state's practice. It sends a message that this is acceptable to use invasive surveillance tools to gather evidence for any crime deemed serious by a particular country irrespective of the human rights implications. And that's wrong.
By allowing countries to set their own standards of what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it's truly served its purpose in tackling cybercrimes without undermining human rights. The stakes are high and I know it's difficult, but we're talking about the UN and we're talking about the UN charter. The international community must work together to ensure that they can protect security and also fundamental rights.
NICK ASHTON-HART: I think Katitza has hit the nail on the head, and there's one particular element I'd like to add to this is something like 40% of the world's countries at the moment either do not have cybercrime legislation or are revising quite old cybercrime legislation. They are coming to this convention, they've told us this, they've coming to this convention because they believe this can be the forcing mechanism, the template that they can use in order to ensure that they get the cooperation that they're interested in.
So the normative impact of this convention would be far greater than in a different situation, for example, where there was already a substantial level of legislation globally and it had been in place in most countries for a long enough period for them to have a good baseline of experience in what actually works in prosecuting cybercrimes and what doesn't.
But we're not in that situation. We're pretty much in the opposite situation and so this convention will have a disproportionately high impact on legislation in many countries because with the technical assistance that will come with it, it'll be the template that is used. Knowing that that is the case, we should be even more conservative in what we ask this convention to do and even more careful to ensure that what we do will actually help prosecute real cybercrimes and not facilitate cooperation on other crimes.
This makes things even more concerning for the private sector because of this. We want to actually see a result that improves the situation for real citizens that actually protects victims of real crimes and that doesn't allow as is unfortunately the case here, even large-scale cybercrime to go unpunished. That's in no one's interest, but this convention will not actually help with that. At this point we would have to see it as net harmful to that objective, which is supposed to be a core objective.
ALI WYNE: We've discussed quite extensively the need for international agreements when it comes to cybercrime. We've also mentioned some of the concerns about the current deal on the table. Nick, what would you need to see to mitigate some of the concerns that you have about the current deal on the table?
NICK ASHTON-HART: The convention should be limited to the offenses that it contains. Its provisions should not be available for any other criminal activity or cooperation. That would be the starting place. The second thing would be to inscribe crimes that are self-evidently criminal through providing for mens rea in all the articles to avoid the problems with whistleblowers, and journalists and security researchers. There should be a separate commitment that the provisions of this convention do not apply to actors acting in good faith to secure systems such as those that have been described. There must be, we think, transparency. There is no excuse for a user not to be notified at the point that the offense for which their data was accessed has been adjudicated or the prosecution abandoned and that should be explicitly provided.
People have a right to know what governments are doing with their personal information. We think it should be much clearer what dual criminality is. It should be very straightforward that without dual criminality, no cooperation under the convention will take place so that requests go through more quickly. It's much more clear that it is basically the same crime in all the cooperating jurisdictions. I would say those were the most important.
ALI WYNE: Katitza, you get the last word. What would you need to see to mitigate some of the concerns that you've expressed in our conversation about the current draft text on the table?
KATITZA RODRIGUEZ: First of all, we need to rethink how we handle refusals for cross border investigations. The treaty is just too narrow here, offering barely any room to say no. Even when the request to cooperate violates, or is inconsistent with human rights law. We need to make dual criminality a must to invoke the international cooperation powers, as Nick says. This dual criminality principle is a safeguard. That means that if it is not a crime in both countries involved, the treaty shouldn't allow for any assistance. You also need clear mandatory human rights safeguards in all international cooperation, that are robust - with notification, transparency, oversight mechanisms. Countries need to actively think about potential human regulations before using these powers.
It also helps if we only allow cooperation for genuine cybercrimes like real core cybercrimes, and not just any crime involving a computer, or that is generating electronic evidence, which today even the electronic toaster could leave digital evidence.
I just want to conclude by saying actual cybercrime investigations are often highly sophisticated and there's a case to be made for an international effort focused on investigating those crimes, but including every crime under the sun in its scope and sorry, it's really a big problem.
This treaty fails to create that focus. The second thing it also fails to provide these safeguards for security researchers, which Nick explained. We’re fully agreed on that. Security researchers are the ones who make our systems safe. Criminalizing what they do and not providing the effective, safeguards, it really contradicts the core aim of the treaty, which is actually to make us more secure to fight cybercrime. So we need a treaty that it's narrow on the scope and protects human rights. The end result however, is a cybercrime treaty that may well do more to undermine cybersecurity than to help it.
ALI WYNE: A really thought-provoking note on a which to close. Nick Ashton-Hart, head of delegation to the cybercrime convention negotiations for the Cybersecurity Tech Accord and Katitza Rodriguez, policy director for global privacy at A Civil Society Organization, the Electronic Frontier Foundation. Nick, Katitza, thank you so much for speaking with me today.
NICK ASHTON-HART: Thanks very much. It's been a pleasure.
KATITZA RODRIGUEZ: Thanks for having me on. Muchas gracias. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. Catch all of the episodes from this season, exploring topics such as cyber mercenaries and foreign influence operations by following Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne, thanks for listening.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
- Podcast: Foreign influence, cyberspace, and geopolitics ›
- Podcast: Cyber mercenaries and the global surveillance-for-hire market ›
- Podcast: How cyber diplomacy is protecting the world from online threats ›
- Attacked by ransomware: The hospital network brought to a standstill by cybercriminals ›
- Hacked by Pegasus spyware: The human rights lawyer trying to free a princess ›
- Podcast: Can governments protect us from dangerous software bugs? - GZERO Media ›
Podcast: Foreign influence, cyberspace, and geopolitics
Listen: Thanks to advancing technology like artificial intelligence and deep fakes, governments can increasingly use the online world to spread misinformation and influence foreign citizens and governments - as well as citizens at home. At the same time, governments and private companies are working hard to detect these campaigns and protect against them while upholding ideals like free speech and privacy.
In season 2, episode 3 of Patching the System, we're focusing on the international system of bringing peace and security online. In this episode, we look at the world of foreign influence operations and how policymakers are adapting.
Our participants are:
- Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats
- Clint Watts, General Manager of the Microsoft Threat Analysis Center
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Foreign Influence, Cyberspace, and Geopolitics
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Teija Tiilikainen: What the malign actors are striving at is that they would like to see us starting to compromise our values, so question our own values. We should make sure that we are not going to the direction where the malign actors would want to steer us.
Clint Watts: From a technical perspective, influence operations are detected by people. Our work of detecting malign influence operations is really about a human problem powered by technology.
Ali Wyne: When people first heard this clip that circulated on social media, more than a few were confused, even shocked.
AI VIDEO: People might be surprised to hear me say this, but I actually like Ron DeSantis a lot. Yeah, I know. I'd say he's just the kind of guy this country needs, and I really mean that.
Ali Wyne: That was not Hillary Clinton. It was an AI-generated deepfake video, but it sounded so realistic that Reuters actually investigated it to prove that it was bogus. Could governments use techniques such as this one to spread false narratives in adversary nations or to influence the outcomes of elections? The answer is yes, and they already do. In fact, in a growing digital ecosystem, there are a wide range of ways in which governments can manipulate the information environment to push particular narratives.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. Today, we're looking at the growing threat of foreign influence operations, state-led efforts to misinform or distort information online.
Joining me now are Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats, and Clint Watts, General Manager of the Microsoft Threat Analysis Center. Teija, Clint, welcome to you both.
Teija Tiilikainen: Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: Before we dive into the substance of our conversation, I want to give folks an overview of how both of your organizations fit into this broader landscape. So Clint, let me turn to you first. Could you quickly explain what it is that Microsoft's Threat Analysis Center does, what its purpose is, what your role is there, and how does its approach now differ from what Microsoft has done in the past to highlight threat actors?
Clint Watts: Our mission is to detect, assess and disrupt malign influence operations that affect Microsoft, its customers and democracies. And it's quite a bit different from really how Microsoft has handled it up until we joined. We were originally a group called Miburo and we had worked in disrupting malign influence operations until we were acquired by Microsoft about 15 months ago.
And the idea behind it is we can connect what's happening in the information environment with what's happening in cyberspace and really start to help improve the information integrity and ecosystem when it comes to authoritarian nations that are trying to do malign influence attacks. Every day, we're tracking Russia, Iran, and China worldwide in 13 languages in terms of the influence operations they do. That's a combination of websites and social media. Hack and leak operations are a particular specialty of ours, where we work with the Microsoft Threat Intelligence Center: they see the cyberattacks and we see the alleged leaks or influence campaigns on social media, and we can put those together to do attribution about what different authoritarian countries are doing or trying to do to democracies worldwide.
Ali Wyne: Teija, I want to pose the same question to you because today might be the first that some of our listeners are hearing of your organization. What is the role of the European Center of Excellence for Countering Hybrid Threats?
Teija Tiilikainen: So this center of excellence is an intergovernmental body that was originally established six years ago by nine governments from various EU and NATO countries, but today covers 35 governments. So we cover 35 countries, EU and NATO allies all, plus we cooperate closely with the European Union and NATO. Our task is more strategic, so we try to analyze the broad hybrid threat activity, and with hybrid threats we are referring to unconventional threat forms: election interference, attacks against critical infrastructures, manipulation of the information space, cyber and all of that. We create capacity, we create knowledge and information about these things, and try to provide recommendations and share best practices among our governments about how to counter these threats, how to protect our societies.
Ali Wyne: We're going to be talking about foreign influence operations throughout our conversations, but just first, let's discuss hybrid conflicts such as what we're seeing and what we have seen so far in Ukraine. And I'm wondering how digital tactics and conflicts have evolved. Bring us up to speed as to where we are now when it comes to these digital tactics and conflicts and how quickly we've gotten there.
Teija Tiilikainen: So it's easiest to start with our environment, where societies rely more and more on digital solutions, on the information technology that is a part of our societies. We built that for the benefit of our democracies and their economies. But in this deep conflict where we are, a conflict that has many dimensions, one of them being the one between democracies and authoritarian states, that changes the role of our digital solutions, and all of a sudden we see how they have started to be exploited against our security and stability. So it is about our reliance on critical infrastructures, where we have digital solutions, and we have the whole information space, the cyber systems.
So this is really a new, strengthening dimension in conflicts worldwide, where we talk more and more about the need to protect our vulnerabilities, and the vulnerabilities more and more take place in the digital space, in digital solutions. So this is a very technological instrument, more and more requiring advanced solutions from our societies, from us, and a very different understanding of threats and risks. If we compare that with the more traditional picture, where armed attacks and military operations used to form threat number one, now it is about the resilience of our critical infrastructures, it is about the security and safety of our information solutions. So the world looks very different, and ongoing conflicts do as well.
Ali Wyne: And that word, Teija, that you use, resilience, I suspect that we're going to be revisiting that word and that idea quite a bit throughout our conversation. Clint, let me turn to you now and ask how foreign influence operations in your experience, how have they evolved online? How are they conducted differently today? Are there generic approaches or do different countries execute these operations differently?
Clint Watts: So to set the scene for where this started, I think the point to look at is the Arab Spring. When it came to influence operations, the Arab Spring, Anonymous, Occupy Wall Street, lots of different political movements, they all occurred roughly at the same time. We often forget that. That's because social media allowed people to come together around ideas, organize, mobilize, participate in different activities. That was very significant, I think, for nation states, but one in particular, which was Russia, which was a little bit thrown off by, let's say, the Arab Spring and what happened in Egypt, for example. But at the same point it was intrigued by the ability to go into a democracy, infiltrate it in the online environment, and then start to pit people against each other.
That was the entire idea of their Cold War strategy known as active measures, which was to go into any nation, particularly the United States, or any set of alliances, infiltrate those audiences, and win through the force of politics rather than the politics of force. You can't win on the battlefield, but you can win in their political systems, and that was very, very difficult to do in the analog era. Fast-forward to the social media age: what we saw the Russians do was take that same approach with overt media, fringe media, or semi-covert websites that look like they came from the country being targeted. Then combine that with covert troll accounts on social media that look like and talk like the target audience, and then add the layer that they could do that no one else had really put together: cyberattacks, stealing people's information, and timing the leaks of information to drive people's perceptions.
That is what really started about 10 years ago, and our team picked up on it very early in January 2014 around the conflict in Syria. They had already been doing it in the conflict in Ukraine 10 years ago, and then we watched it move towards the elections: Brexit first, the U.S. election, then the French and German elections. This is that 2015, '16, '17 period.
What's evolved since is everyone recognizing the power of information and how to use it and authoritarians looking to grab onto that power. So Iran has been quite prolific in it. They have some limitations, they have resource issues, but they still do some pretty complex information attacks on audiences. And now China is the real game changer. We just released our first East Asia report where we diagrammed what we saw as an incredible scaling of operations and centralization of it.
So looking at how to defend, I think it's remarkable that, like Teija mentioned, in Europe, a lot of countries that are actually quite small, Lithuania, for example, has been able to mobilize their citizens in a very organized way to help as part of the state's defense, come together with a strategy, network people to spot disinformation and refute it if it was coming from Russia.
In other parts of the world though, it's been much, much tougher, particularly even the United States where we've seen the Russians and other countries now infiltrate into audiences and you're trying to figure out how to build a coherent system around defending when you have a plurality of views and identities and different politics. It's actually somewhat more difficult, I think, the larger a country is to defend, particularly with democracies.
And then the other thing is you've seen the resilience and rebirth of alliances and not just on battlefields like NATO, but you see NATO, the EU, the Hybrid CoE, you see these groups organizing together to come through with definitions around terminology, what is a harm to democracy, and then how best to combat it. So it's been an interesting transition I would say over the last six years and you’re starting to see a lot of organization in terms of how democracies are going to defend themselves moving forward.
Ali Wyne: Teija, I want to come back to you. What impact have foreign influence operations had in the context of the war in Ukraine, both inside Ukraine but also globally?
Teija Tiilikainen: I think this is a full-scale war also in the sense that it is very much a war about narratives. So it is about whose story, whose narrative is winning this war. I must say that Russia has been quite successful with its own narrative if we think about how supportive the Russian domestic population still is with respect to the war and the role of the regime, of course there are other instruments also used.
Before the war started, Russia began to promote a very false story about what was going on in Ukraine. There was the argument about an ongoing Nazification of Ukraine. There was another argument about a genocide of the Russian minorities in Ukraine that was supposed to be taking place. And there was also a narrative about how Ukraine had become a tool in the toolbox of the Western alliance, that is NATO, or for the U.S. to exert its influence, and how it was also used offensively against Russia. And these were of course all parts of the Russian information campaign, the disinformation with which it justified and legitimized its war.
If we take a look at the information space in Europe, or more broadly in Africa, for instance, today we see that the Western narrative about the real causes of the war, how Russia violated international law and the integrity and sovereignty of Ukraine, this kind of real fact-based narrative is not doing that well. This proves the strength of foreign influence operations when they are strategic, well-planned, and of course when they are used by actors such as Russia and China that tend to cooperate. Also outside the war zone, China is using this Russian narrative to put the blame for the war on the West and present itself as a reliable international actor.
So there were many elements in the war, not only the military activity, but I would in particular want to emphasize the role of these information operations.
Ali Wyne: It's sobering not only thinking about the impact of these disinformation operations, these foreign influence operations, but also, Teija, you mentioned the ways in which disinformation actors are learning from one another and I imagine that that trend is going to grow even more pronounced in the years and decades ahead. So thank you for that answer. Clint, from a technical perspective, what goes into recognizing information operations? What goes into investigating information operations and ultimately identifying who's responsible?
Clint Watts: I think one of the ironies of our work is, from a technical perspective, influence operations are detected by people. One of the big differences, especially as we work with MSTIC, the cyber team, and our team, is that our work of detecting malign influence operations is really about a human problem powered by technology. And if you want to be able to understand and get your lead, we work more like a newspaper in many ways. We have a beat that we're covering; let's say it's Russian influence operations in Africa. And we have real humans, people with master's degrees who speak the language. I think the team speaks 13 languages in total amongst the 28 of us. They sit and they watch and they get enmeshed in those communities and watch the discussions that are going on.
But ultimately we're using some technical skills, some data science, to pick up those trends and patterns, because the one thing that's true of influence operations across the board is you cannot influence and hide forever. Ultimately your position or, as Teija said, your narratives will track back to the authoritarian country - Russia, Iran, China - and what they're trying to achieve. And there are always tells, common tells: context, words, phrases, sentences used out of context. And then you can also look at the technical perspective. Nine years ago when we came onto the Russians, the number one technical indicator of Russian accounts was Moscow time versus U.S. time.
Ali Wyne: Interesting.
Clint Watts: They worked in shifts. They were posing as Americans, but talking at 2:00 AM mostly about Syria. And so it stuck out, right? That was a contextual thing. You move, though, from those human tips and insights, almost like a beat reporter, to using technical tools. That's where we dive in. So that's everything from understanding associations of time zones, how different batches of accounts on social media might work in synchronization together, how they'll change topics time and time again. The Russians are a classic example, wanting to talk about Venezuela one day, Cuba the next, Syria the third day, the U.S. election the fourth, right? They move in sequence.
And so I think when we're watching people and training them, when they first come on board, it's always interesting. We try and pair them up in teams of three or four with a mix of skills. We have a very interdisciplinary set of teams. One will be very good in terms of understanding cybersecurity and technical aspects. Another one, a data scientist. All of them can speak a language and ultimately one is a former journalist or an international relations student that really understands the region and it's that team environment working together in person that really allows us to do that detection but then use more technical tools to do the attribution.
Ali Wyne: So you talked about identifying and attributing foreign influence operations, and that's one matter, but how do you actually combat them? How do you combat those operations, and what role, if any, can the technology industry play in combating them?
Clint Watts: So we're at a key spot, I think, in Microsoft in protecting the information environment, because we do understand the technical signatures much better than any one government could probably do or should. There are lots of privacy considerations, which we take very seriously at Microsoft, about maintaining customer privacy. At the same point, the role that tech can play is illustrated by some of our recent investigations, one of them where we found more than 30 websites which were being run out of the same three IP addresses. They were all sharing content pushed from Beijing, but to local environments, and local communities don't have any idea that those are actually Chinese state-sponsored websites.
So what we can do being part of the tech industry is confirm from a technical perspective that all of these things and all of this activity is linked together. I think that's particularly powerful. Also in terms of the cyber and influence convergence, we would say, we can see a leak operation where an elected official in one country is targeted as part of a foreign cyberattack for a hack and leak operation. We can see where the hack occurred. If we have good attribution on it, Russia and Iran in particular, we have very strong attribution on that and publish on it frequently, but then we can match that up with the leaks that we see coming out and where they come out from. And usually the first person to leak the information is in bed with the hacker that got the information. So that's another role that tech can play in particular about awareness of who the actors are, but what the connections are between one influence operation and one cyberattack and how that can change people's perspectives, let's say, going into an election.
Ali Wyne: Teija, I want to come back to you to ask you about a dilemma that I suspect you and your colleagues, and everyone operating in this space, are grappling with. And I want to put the question to you. I think that one of the central tensions in combating disinformation, of course, is preserving free speech in the nations where it exists. How should democracies approach that balancing act?
Teija Tiilikainen: This is a very good question, and I think what we should keep in mind is that what the malign actors are striving for is that they would like to see us starting to compromise our values, to question our own values: open society, freedom of speech, rule of law, democratic practices, and the principle of democracy. We should stick to our values; we should make sure that we are not going in the direction where the malign actors would want to steer us. But it is exactly as you formulate the question: how do we make sure that these values are not exploited against our broad societal security, as is happening right now?
So of course there is not one single solution. Technological solutions certainly can help us protect our society, as can broad awareness in society about these types of threats. Media literacy is the keyword many times mentioned in this context. A totally new approach to the information space is needed and can be achieved through education and study programs, but also by supporting quality media and the kind of media that relies on journalistic ethics. So we must make sure that our information environment is solid and that also in the future we will have the possibility to make a distinction between disinformation and facts, because that distinction is getting very blurred in a situation where there is a competition about narratives going on. Information has become a tool in many different conflicts that we have in the international space, but also many times at the domestic level.
I would like to offer the center's model because it's not only that we need cooperation between private actors, companies, civil society actors and governmental actors in states. We also need firm cooperation among like-minded states, sharing of best practices, learning. We can also learn from each other. If the malign actors do that, we should also take that model into use when it comes to questions such as how to counter, how to build resilience, what are the solutions we have created in our different societies? And this is why our center of excellence has been established exactly to provide a platform for that sharing of best practices and learning from each other.
So it is a very complicated environment in terms of our security and our resilience, so we need a multiple package of tools to protect ourselves. But I still want to stress our values, and the very fact that this is what the malign actors would like to challenge and want us to challenge as well. So, let's stick to them.
Ali Wyne: Clint, let me come back to you. So we are heading into an electorally very consequential year. And perhaps I'm understating it, 2024 is going to be a huge election year, not only for the United States but also for many other countries where folks will be going to polls for the first time in the age of generative artificial intelligence. Does that fact concern you and how is artificial intelligence changing this game overall?
Clint Watts: Yeah, so I think it's too early for me to say as strange as it is. I remind our team, I didn't know what ChatGPT was a year ago, so I don't know that we know what AI will even be able to do a year from now.
Ali Wyne: Fair point, fair point.
Clint Watts: In the last two weeks, I've seen or experimented with so many different AI tools that I just don't know the impact yet. I need to think it through and watch a little bit more in terms of where things are going with it. But there are a few notes that I would say about elections and deepfakes or generative AI.
Since the invasion of Ukraine, we have seen very sophisticated fakes of both Zelensky and Putin and they haven't worked. Crowds, when they see those videos, they're pretty smart collectively about saying, "Oh, I've seen that background before. I've seen that face before. I know that person isn't where they're being staged at right now." So I think that is the importance of setting.
Public versus private I think is where we'll see harms in terms of AI. When people are alone and AI is used against them, let's say, a deepfake audio for a wire transfer, we're already seeing the damages of that, that's quite concerning. So I think from an election standpoint, you can start to look for it and what are some natural worries? Robocalls to me would be more worrisome really than a deepfake video that we tend to think about.
The other things about AI that I don't think get enough examination, at least from a media perspective: everyone thinks they'll see a politician say something. Your opening clip is an example of it, and it will fool audiences in a very dramatic way. But the power of AI in terms of utility for influence operations is mostly about understanding audiences, or being able to connect with an audience with a message and a messenger that is appropriate for that audience. And by that I mean creating messages that make more sense.
Part of the challenge for Russia and China in particular is always context. How do you look like an American or a European in their country? Well, you have to be able to speak the language well. That's one thing AI can help you with. Two, you have to look like the target audience to some degree. So you could make messengers now. But I think the bigger part is understanding the context and timing and making it seem appropriate. And those are all things where I think AI can be an advantage.
I would also note that here at Microsoft, my philosophy with the team is machines are good at detecting machines and people are good at detecting people. And so there are a lot of AI tools we're already using in cybersecurity, for example, with our copilots where we're using AI to detect AI and it's moving very quickly. As much as there's escalation on the AI side, there's also escalation on the defensive side. I'm just not sure that we've even seen all the tools that will be used one year from now.
Ali Wyne: Teija, let me just ask you about artificial intelligence more broadly. Do you think that it can be both a tool for combating disinformation and a weapon for promulgating disinformation? How do you view artificial intelligence broadly when it comes to the disinformation challenge?
Teija Tiilikainen: I see a lot of risks. I do see also possibilities, and artificial intelligence certainly can be used as a resilience tool. But the question is more about who is faster, whether the malign actors take full advantage of AI before we find the loopholes and possible vulnerabilities. I think it's very much a hardcore question for our democracies. The day when an external actor can interfere efficiently in our democratic processes, in elections, in election campaigns, the very day when we can no longer be sure that what is happening in that framework is domestically driven, that day will be very dangerous for the whole democratic model, the whole functioning of our Western democracies.
And we are approaching that day, and AI is, as Clint explained, one possible tool for malign actors who want to discredit not only the model, but also interfere in democratic processes, affect the outcomes of elections, the topics of elections. So deepfakes and all the solutions that use AI, they are so much more efficient, so much faster, and they are able to use so much data.
So I see unlimited possibilities unfortunately for the use of AI for malign purposes. So this is what we should focus on today when we focus on resilience and the resilience of our digital systems.
And this is also a highly unregulated field, also at the international level. If we think about weapons, if we think about military force, well, now we are in a situation of deep conflict, but before we were there we used to have agreements and treaties and conventions between states that regulated the use of weapons. Those agreements are no longer in very good shape. But what do we have in the realm of cyber? At the international level this is a highly unregulated field. So there are many problems, and I can only encourage and stress the need to identify the risks that come with these solutions. And of course we need regulation of AI solutions and systems at the state level, as well as hopefully at some point also international agreements concerning the use of AI.
Ali Wyne: I want to close by emphasizing that human component and ask you, as we look ahead and as we think about ways in which governments, private sector actors, individuals, and others in this ecosystem can be more effective at combating disinformation and foreign influence operations, what kinds of societal changes need to happen to neutralize the impact of these operations? So talk to us a little bit more about the human element of this challenge and what kinds of changes need to happen at the societal level.
Teija Tiilikainen: I would say that we need a cultural change. We need to understand societal security very differently. We need to understand the risks and threats against societal security in a different way. And this is about education. This is about schools, this is about study programs at universities. This is about openness in media, about risks and threats.
But this is needed also in those countries that do not have the tradition. In the Nordic countries, here in Finland and in Scandinavia, we have a firm tradition of public-private cooperation when it comes to security policy. We are small nations and the geopolitical region has been unstable for a long time, so there is a need for public and private actors to share the same understanding of security threats and also cooperate to find common solutions. And I can only stress the importance of public-private cooperation in this environment.
We need more systematic forms of resilience. We have to ask ourselves: what does resilience mean? Where do we start building resilience? Which are all the necessary components of resilience that we need to take into account? So we have international elements, we have national elements, local elements, we have governmental and civil society parts, and they are all interlinked. There is no safe space anywhere. We need to create comprehensive solutions that cover all possible vulnerabilities. So I would say that the security culture needs to be changed. In the old security culture we tend to think about domestic threats and then international threats; now they are part of the same picture. We tended to think about military and nonmilitary; they too are very much interlinked in this new technological environment. So new types of thinking, new types of culture. I would like to get back to universities and schools and try to engage experts to think about the components of this new culture.
Ali Wyne: Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats. Clint Watts, General Manager of Microsoft Threat Analysis Center. Teija, Clint, thank you both very much for being here.
Teija Tiilikainen: Thank you. It was a pleasure. Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of our new season. I'm Ali Wyne. Thank you very much for listening.
Podcast: Cyber mercenaries and the global surveillance-for-hire market
Listen: The use of mercenaries is nothing new in kinetic warfare, but they are becoming a growing threat in cyberspace as well. The weapon of choice for cyber mercenaries is malicious spyware that undermines otherwise benign technologies and can be sold for profit. Luckily, awareness about this threat is also growing, and increasing global coordination efforts are being put forth to combat this dangerous trend.
In episode 2, season 2 of Patching the System, we're focusing on the international system of bringing peace and security online. In this episode, we look at what governments and private enterprises are doing to combat the growth of the cyber mercenary industry.
Our participants are:
- Eric Wenger, Senior Director for Technology Policy at Cisco
- Stéphane Duguin, CEO of the CyberPeace Institute
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: Cyber mercenaries and the global surveillance-for-hire market
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Eric Wenger: There's no phishing or fooling of the user into installing something on their device. This technology is so powerful that it can overcome the defenses on a device. So this is a tool that is on the level of sophistication with a military grade weapon and needs to be treated that way.
Stéphane Duguin: What we're facing is a multifaceted threat with a loose network of individuals, financiers, and companies which are playing a link in between states when it comes to a deployment of these surveillance capabilities. So if you want to curb this kind of threats, you need to act as a network.
Ali Wyne: In the ongoing war in Ukraine, both sides have employed mercenaries to supplement and fortify their own armies. Now, guns for hire are nothing new in kinetic warfare, but in cyberspace, mercenaries exist as well to augment government capabilities and their weapon of choice is malicious spyware that undermines peaceful technology, and which can be sold for profit. Today we'll enter the world of cyber mercenaries and the work that's being done to stop them.
Welcome to Patching The System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. In this episode, we're looking at the latest in cyber mercenaries and what's being done to stop them. Last season we spoke to David Agranovich, director of Global Threat Disruption at Meta, about what exactly it is that cyber mercenaries do.
David Agranovich: These are private companies who are offering surveillance capabilities, which once were essentially the exclusive remit of nation state intelligence services, to any paying client. The global surveillance for hire industry, for example, targets people across the internet to collect intelligence, to try and manipulate them into revealing information about themselves and ultimately to try and compromise their devices, their accounts, steal their data.
Ali Wyne: And since then, awareness has grown and efforts to fight these groups have been fast tracked. In March of this year, the Tech Accord announced a set of principles specifically designed to curb the growth of the cyber mercenary market, which some estimate to be more than $12 billion globally. That same month, the White House issued an executive order to prohibit the U.S. government from using commercial spyware that could put national security at risk, an important piece of this cyber mercenary ecosystem.
On the other side of the Atlantic, a European Parliament committee finalized a report on the use of spyware on the continent and made recommendations for regulating it. And most recently, bipartisan legislation was introduced in the United States to prohibit assistance to foreign governments that use commercial spyware to target American citizens.
Are all of these coordinated efforts enough to stop the growth of this industry? Today I'm joined by Eric Wenger, Senior Director for Technology Policy at Cisco, and Stéphane Duguin, CEO of the CyberPeace Institute. Welcome to you both.
Eric Wenger: Thank you.
Stéphane Duguin: Thank you.
Ali Wyne: Now, I mentioned this point briefly in the introduction, but I'd love to hear more from both of you about specific examples of what it is that cyber mercenaries are doing. What characterizes their work, especially from the latest threats that you've seen?
Stéphane Duguin: It's important maybe to start with a bit of definition of what are we talking about when we talk about cyber mercenaries. So interestingly, there is the official definition and what we all mean. Official definition - you can find this in the report to the General Assembly of the United Nations, where it's really linked to private actors that can be engaged by states and non-state actors. It's really about the states taking action to engage someone, to contract someone in order to look into cyber operations in the context of an armed conflict.
I would argue that for this conversation, we need to look at the concept of cyber mercenaries more widely and look at this as a network of individuals, of companies, of financial tools, of specific interests that, at the end of the day, ensure global insecurity. Because all of this is about private sector entities providing their expertise, their time, their tools to governments to conduct, clearly at scale, an illegal, unethical surveillance. And to do this, investment - money - needs to pour into a market, because it's a market which finances what? Global insecurity.
Eric Wenger: I would add that there's another layer to this problem that needs to be put into context, and that is, Stéphane correctly noted, that these are private sector entities and that their customers are governments that are engaged in some sort of activity that is couched in terms of protecting safety or national security. But the companies themselves are selling technology that is regulated and therefore is being licensed from a government as well too. I think that's really the fascinating dynamic here is that you have a private sector intermediary that is essentially involved in a transaction that is from one government to another government with that private sector actor in the middle being the creator of the technology, but it is subject to a license by one government for a sale to another government.
Ali Wyne: This market is obviously growing quickly, and I mentioned in my introductory remarks that $12 billion global figure, so obviously there's a lot of demand. From what you've seen, who are the customers and what's driving the growth of this industry?
Eric Wenger: Well, the concerning part of the story is that there have been a number of high profile incidents that have indicated these technologies are being used not just to protect against attacks on a nation, but in order to benefit the stability of a regime. And in that context, what you see are journalists being the subject of the use of these technologies or dissidents, human rights activists. And that's the part that really strikes me as being quite disturbing. And it is frankly the hardest part of this problem to get at because as I noted before, if you have these private sector actors that are essentially acting as intermediaries between governments, then it's hard to have a lot of visibility from the outside of this market into what are the justifications that are enabling sales. Who is this technology going to? How is it being used and how is it potentially being checked in order to address the human rights concerns that I've flagged here?
Ali Wyne: Stéphane, let me come back to you. So you used to work in law enforcement and given your law enforcement background, one question that one might ask is why shouldn't governments be taking advantage of cyber mercenaries if they are making tools that help to, for example, track down terrorists or otherwise fight crime and improve national defense? Why shouldn't governments be taking advantage of them?
Stéphane Duguin: Something that is quite magical about law enforcement, it's about enforcing the law. And in this case, there's clear infringement all over the place. Let's look into the use cases that we know about. So when it comes to law, what kind of judicial activities have been undertaken after the use, sale or export of these kinds of tools? So there's this company, Amesys, which is now sued for complicity in acts of torture, over sales of surveillance technologies to Libya. You have these cases of dissidents who have been arrested in Egypt in the context of the acquisition of the Predator tool. More recently we've seen what happened in Greece with this investigation around the surveillance of critics and opponents. And you can add example after example. This has nothing to do with law enforcement.
So my experience in law enforcement is that when you have a case, you have an oversight, a judicial oversight. I was lucky to work in law enforcement in Europe, so a democratic construct that goes under the oversight of parliament. Where is this construct when a private sector entity has free rein to research and develop, increase, export, exactly as was said before, in between states, a technology - which, by the way, is creating expertise within that same company for people that are going to sell this expertise left and right? Where is the oversight? And where are the rules that would put this into a normal law enforcement system?
And just to finish on this, I worked on investigating terrorist groups and cyber gangs most of my career, and we can do cases, we can do very, very, very good cases. I would not say, admittedly, that the problem is about putting everyone under surveillance. The problem is more about investing resources in law enforcement and in the judicial system to make sure that when there's a case, there's accountability and redress and repair for victims. And these do not need surveillance at scale.
Ali Wyne: Eric, let me come back to you. So, I want to give folks who are listening a little bit of a sense of the size of the problem and to help put it in perspective. So when we talk about cyber mercenaries, just how big is the threat from them and the organizations for which they work? Is that threat just an annoyance or is it a real cause for concern? And who's most affected by the actions that they take?
Eric Wenger: We could talk about the size of the market and who is impacted by it. That's certainly part of the equation in trying to size the threat. But we also have to have a baseline understanding of what the technology is that we're talking about in order for people to appreciate why there's so much concern. And we're talking about exploits that can be sent from the deployer of the technology to a mobile device that's used by an individual or an organization without any action being taken by the user. There's nothing you have to click, there's nothing you have to accept. There's no phishing or fooling of the user into installing something on their device. This technology is so powerful that it can overcome the defenses on a device. And then that device is then completely compromised so that cameras can be turned on, files stored on the device can be accessed, microphones can be activated.
So this is a tool that is on the level of sophistication with a military grade weapon and needs to be treated that way. So the concern is the cutout of a private sector entity in between the government, and these are typically democratic governments that are licensing these technologies to other governments that wouldn't have the capabilities to develop these technologies on their own. And then once in their hands, it's difficult if not impossible, to make sure that they are used only within the bounds of whatever the original justification for it was.
So in theory you would say, let's say there was some concern about a terrorist operation that justified the access to this technology, which in that government's hands can be repurposed for other things that might be a temptation, which would include protecting of the stability of the regime by going after those who are critics or dissidents or journalists that are writing things that they view as being unhelpful to their ability to govern. And so those lines are very difficult to maintain with a technology that is so powerful that is in the hands of a government without the type of oversight that Stéphane was referencing before.
Ali Wyne: So Stéphane, let me come back to you. And just building off of the answer Eric just gave, what groups and individuals are most at risk from this growing cyber mercenary market?
Stéphane Duguin: History has shown that those who have been targeted by the deployment of these tools and the activities of the cyber mercenaries are political opponents and journalists, human rights defenders, lawyers, government officials, pro-democracy activists, opposition members and so on. So we are quite far from terrorists or organized crime and the like.
And interestingly, it's not only that this profile of who is targeted gives a lot of information about the whole ethics and values that are underlying this market ecosystem. But also what is concerning is that we know about this not from law enforcement or from public sector entities which would investigate the misuse of these technologies and blow the whistle. We know about this thanks to the amazing work of a few organizations over the past decade, like the Citizen Lab and Amnesty Tech, who could track and demonstrate the usage, for example, of FinFisher against pro-democracy activists in 2012 and opposition members in 2013, FinSpy afterwards, and then it moved to Pegasus from NSO.
Now we just have the whole explanation of what happened with the Predator. It's quite concerning that these activities, which are at the core of abuse of human rights and of the most essential privacy, are not only happening in the shadows, as Eric was mentioning before, with a total asymmetry between the almost military-grade tools that are put in place and the little capacity for the targets to defend themselves. And this is uncovered not by the people we entrust with our public services and the enforcement of our rights, but by investigative groups, civil society, which are now almost making a living doing global investigations against the misuse of offensive cyber capabilities.
Ali Wyne: Your organization, the CyberPeace Institute, what is the CyberPeace Institute doing to combat these actors? And more broadly, what is the role of civil society in working to address this growing challenge of cyber mercenary actors?
Stéphane Duguin: What we're facing is a multifaceted threat with a loose network of individuals, financiers, companies which are playing a link in between states when it comes to the deployment of these surveillance capabilities. So if you want to curb this kind of threat, you need to act as a network. So the role of the CyberPeace Institute, among other civil society organizations, is to put all together the capable and the willing so that we can look at the whole range of issues we're facing.
One part of it is the research and development and deployment of these tools. The second part is the detection of their usage. Another part is looking into the policy landscape and informed policymaking, and demonstrating that some policies have been violated - export controls, for example, when it comes to the management of these tools. Another part of the work is about measuring the human harm that these tools are leading to.
So we, for example, at the CyberPeace Institute cooperated with the development of the Digital Violence Platform, which is showing the human impacts of, for example, the usage of Pegasus on individuals. We also are in the lead in one of the working groups of the Paris Peace Forum. We need to bring a multi-stakeholder community to a maturity level to understand exactly what this threat is costing to society and what kind of action we could take all together.
And we notably, last year at the World Economic Forum, joined forces with Access Now, the Office of the High Commissioner for Human Rights, Human Rights Watch, Amnesty International, the International Trade Union Confederation and Consumers International to call for a moratorium on the usage of these tools until we have the certainty that they are researched, deployed, exported and used with the proper oversight, because otherwise the checks and balances cannot work.
Ali Wyne: And you just mentioned Pegasus spyware and that kind of software has been getting more and more attention, including from policymakers. So Eric, let me come back to you now. What kinds of actions are governments taking to curb this market?
Eric Wenger: So as I noted before that this is an interesting combination of technology, of private sector entities that are creating the technology, the regulators who are in the governments where those companies are located who control the sale of the technology, and then the technology consumers who are, again, as Stéphane noted, other governments. And so it's this interesting blend of private and public sector actors that's going to require some sort of coordinated approach that runs across both. And I think you're seeing action in both of those spheres. In terms of private sector companies, Cisco, my employer, joined together with a number of other companies filing a friend of the court or amicus brief in litigation that had been brought by what was then Facebook, now Meta, against a company that was deploying technology that had hacked into their WhatsApp software. And in that case we joined together with a number of other companies, I believe it was Microsoft and Dell and Apple and others who joined together in filing a brief in that case.
We of course come together under the umbrella of the Tech Accord and we can talk about the principles that we developed among the companies. I think there's 150 companies that joined ultimately in signing that document in agreement that we have concerns that there are things we want to do in a concerted way to try to get at this market so that it doesn't cause the kinds of impacts that Stéphane talked about before.
Again, there's clearly a strong government to government piece of this that needs to be taken on. And then Stéphane also noted the Paris Peace Forum, and that this topic of how to deal with spyware and cyber mercenaries is going to be on the agenda there, which again is important because this is a government led forum, but it's one where you also see private sector and civil society entities actively engaged. Stéphane also mentioned the important work that's being done by Citizen Lab. And then we have threat intelligence researchers at Cisco that operate under the brand of Talos.
These are some of the most effective threat intelligence researchers in the world, and they're really interested in this problem as well too, and starting to work with people who suspect that their devices may have been compromised in this way to take a look at them and to help them.
And then the companies that make the cell phones and operating systems, Google and Apple for instance, have been doing important work about detecting these kinds of changes to the devices and then providing notice to those whose devices may have been impacted in these ways so that they are aware and are able to try to take further defensive measures. It's really quite an active space and as we've discussed here several times, it's one that will only be really effectively taken on through a concerted effort that runs across the government and private sector space. And again, also with civil society as well too.
Ali Wyne: Talk to us a little bit about what technology companies can do to shut down this market?
Eric Wenger: Yeah, it was natural that this would grow out of the Tech Accord, which itself was a commitment by companies to protect their customers against attacks that misuse technology that are coming from the government space. There was a recognition among our companies that yes, some of this is clearly most effectively addressed at that government to government level with awareness that's being created by civil society. But this is also a problem that relates to the creation of technology and the companies that are engaged in these business models are procuring and using technology that could be coming from companies that find this business model to be highly problematic.
And so that's essentially what we did: we sat down as a group and started to talk about what part of the problem technology, and access to technology, potentially contributes that we have some ability to make a difference on. And then we agreed amongst ourselves on the steps that we might be able to take to limit the proliferation of this technology and the market and the companies that are engaging in this type of business. And then that, coming together with the work that's being done at the government-to-government level, hopefully will make a significant dent in the size of this market.
Ali Wyne: Stéphane, let me come back to you as promised. Whether it's governments, whether it's technology companies, what kinds of actions can these actors take to shut down this cyber mercenary market?
Stéphane Duguin: Eric listed a lot of what is happening in this space, and it's very exhaustive and it tells you how complex the answer is. We try to put this into a framework: what is expected from states is regulation first. So regulation meaning not only having the regulation but implementing the regulation. And under the word regulation, I would even put the norms discussion, where there are non-binding norms that have been agreed between states, and some of them could be leveraged and operationalized in order to prevent such a proliferation, because that's what we're talking about.
Another type of regulation that could be much better implemented is export control. For example, in the European Union, we at the CyberPeace Institute were discussing this in the context of the PEGA Committee, the work from the EU Parliament when it comes to looking into the lawfulness and ethical use of these kinds of tools.
But also, when we had this multi-stakeholder approach of the EU Cyber Agora to discuss the problem, clearly export control needs to be put at another level of operationalization. So that's regulation. Regulation then needs to mean capacity to litigate - giving the space and the means to your apparatus that is in the business of litigation.
So today, what do we have? For example, executive from Amesys and Nexa Technologies that were indicted for complicity in torture; NSO group which is facing multiple lawsuits by mostly civil society and corporate plaintiffs in various countries, but that's clearly not enough.
So this should be not only coming from civil society, journalists, plaintiffs, but we should see some investigative capacity from states, meaning law enforcement, looking into this kind of misuse. The other part is attribution - public attribution of what is happening. So who are the actors, what are these companies, how are these networks working?
So we can see over time how the regulation and the litigation are having an impact on the ecosystem. Otherwise, it's like emptying the ocean with a spoon. So I guess, you know, the great work done by the community - we mentioned it before, the Citizen Lab, Amnesty Tech, Access Now, the work of tons of other organizations, I don't want to forget anyone - is not going to scale if policymakers do not do their job. What is policymaking in the criminal context? It is reducing the space that you give to criminals. And today, in this context of cyber mercenaries, the space is way too big. So I would say regulation, litigation and public attribution - it's kind of a roadmap for governments.
Ali Wyne: Eric, let me come back to you. You already mentioned in one of your earlier answers these principles that the Tech Accord came out with recently - just a few months ago, in fact - to oppose the cyber mercenary industry. Talk to us a little bit more about what exactly those principles entail and what their intended impact is.
Eric Wenger: Sure. Stéphane also makes an important point around the context of what governments can do. Things like putting companies that are of concern on the entity list to restrict their ability to license technology that they might need in order to build the tools that they are selling. But coming back to where companies like those who joined the Tech Accord can make a difference. I noted that these principles build on the Cybersecurity Tech Accord's founding commitments, which are about building strong defense into our products, not enabling offensive use of our technologies, capacity building - in other words, helping the ability of governments to do the work that they need to protect their citizens - and working together across these different domains with the private sector, civil society and governments. These particular principles are aimed at this specific problem. And the idea is that we will collectively try to work together to take steps countering the use of the products that will harm people, and we can identify ways that we can actively counter the market.
One of the ways that we mentioned before is the participation in litigation where that's the appropriate step. We're also investing in cybersecurity awareness to customers so that they have more understanding of this problem. There are tools that are being built by the companies that are developing the operating systems on mobile devices that can, if you're in a highly vulnerable group like you're a journalist or a human rights dissident or a lawyer working in an oppressive legal environment, there are more defensive modes that some of these phones now enable. And then we're working to, and this is an example of our companies working together and on our own to protect customers and users by building up the security capabilities of our devices and products.
And then finally - Stéphane mentioned his role in law enforcement before; I also was a computer crime prosecutor at the Department of Justice - it's really important for those who are conducting legitimate lawful investigations to have clear understandings of the processes that are used by companies to handle valid legal requests for information. And so we built that into this set of principles as well: where there are legal and lawful pathways to get information from a company's lawful intercept or compulsory access tools and things like that, we are transparent about how we operate in those spaces and we clearly communicate what our processes are for handling those kinds of demands from governments.
Ali Wyne: Final question for both of you. What is the single most important step that societies can take to stop the work of cyber mercenaries?
Stéphane Duguin: Eric opened it very, very well in the sense of what we see as the ambition and the partnership, the activities that are deployed both by civil society and by corporations - the Tech Accord is an excellent example - in order to curb these threats. And interestingly, maybe it also came from the fact that there was not so much push on the government side to do something at scale against that threat. So clearly, today, who represents society and the needs of society in this context, in pushing the ball forward, is civil society, corporations, academia. And I would say now governments are starting to get the size of the problem. Something that Eric mentioned, I would like to build on it, because it's about society and the values that we believe in as a society: there's a need for law enforcement, and a lot of law enforcement and judiciary want to work in a lawful way. That's the vast majority, at least of the law enforcement that I can relate to when it comes to Europe, where I worked.
In this context, it's quite important that the framework is clear, the capacities are there, the resources are there, so that it doesn't give so much of a space for these cyber mercenaries to impose themselves as the go-to platform, the place where solutions can be engineered because there's nothing else out there. Something else: society has to make a choice. Do we want to have such a market in proliferation without, today, any checks and balances, any oversight - just the wild west of surveillance? Or do we say stop, at minimum impose a moratorium, and put in place some clear oversight processes, looking into what makes sense and what we can accept as a society before letting this go? And the last thing is to invest, as best we can, in the regulation that we're having and that we're going to have. The regulation, for example, now under negotiation in the EU - like the AI Act or the Cyber Resilience Act or the Cyber Solidarity Act - it would not take much to have this regulation looking into not only what makes systems insecure, but also who is trying to make systems insecure.
Ali Wyne: Eric, let me come to you to close us out and put the same question to you. What is the single most important step that societies can take to stop the work of cyber mercenaries?
Eric Wenger: Well, I'd love to say it was one thing, but it really is going to be a combination of things that come together as one maybe. And that's really going to involve this dynamic where the governments that are regulating access to the market of this technology, the governments that are... It may not be reasonable to expect that the governments that want to consume this technology will come to the table, but certainly the governments that have control over the markets where the technology is being developed, working together. And so as Stéphane mentioned, the United States government, the French government, the UK government have really all been out in front on this.
Those governments and others that share the concerns coming together with the experts in the threat intelligence space in academia, in civil society, in companies, and then companies that supply technologies that are critical, foundational elements of the ability of companies who are developing these technologies to engage in the market, also have an important role to play. And I think that's what we're bringing to the equation for the first time.
So it's this combination of actors that are coming together, recognizing that it's a problem and agreeing that there's something that we all need to do together in order to take this on. It's really the only way that we can be effective at addressing the concerns that we've been discussing here today.
Ali Wyne: Eric Wenger, Senior Director for Technology Policy at Cisco. Stéphane Duguin, CEO of the CyberPeace Institute. Thank you both so much for speaking with me today.
Eric Wenger: Thank you for having us.
Ali Wyne: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of this new season. I'm Ali Wyne. Thank you very much for listening.
Podcast: How cyber diplomacy is protecting the world from online threats
Listen: Just as bank robbers have moved from physical banks to the online world, those fighting crime are also increasingly engaged in the digital realm. Enter the world of the cyber diplomat, a growing force in international relations specifically focused on creating a more just and safe cyberspace.
In season 2 of Patching the System, we're focusing on the international systems and organizations of bringing peace and security online. In this episode, we're discussing the role of cyber diplomats, the threats they are combatting, and how they work with public and private sectors to accomplish their goals.
Our participants are:
- Benedikt Wechsler, Switzerland's Ambassador for Digitization
- Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: How cyber diplomacy is protecting the world from online threats
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
BENEDIKT WECHSLER: We have to be aware that although we are so familiar with the cyber and digital world, it's still a new technology. And I think we don't have that much time to develop these organizations and rules as we had for the maritime or for the airspace.
KAJA CIGLIC: This situation is both terrifying and sort of deteriorating, I would say, at the same time. Part of the reason is because the technology's evolving so fast. Every time there is a new tool put on the market, it can, and someone will try and test it as a weapon.
ALI WYNE: It is hard to overstate just how much we rely on digital technology and connectivity in our daily lives, from the delivery of essential services, including drinking water and electricity, to how we work, pay our bills, get our news. Increasingly, it all depends on an ever-growing cyberspace. But as humanity's digital footprint grows, cyberspace is also growing as a domain of conflict, where attacks have the potential to bring down a power grid, rattle the stock market, or compromise the data and security of millions of people in just moments.
Got your attention? Well, good.
Welcome to the second season of Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst with Eurasia Group.
Now, whether you're a policy expert, you're a curious coder, or you're just a listener wondering if your toaster is plotting global domination, this podcast is for you. Throughout this series, we're highlighting the work of the Cybersecurity Technology Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
Last season, we tackled some of the more practical aspects of cybersecurity, including protecting the Internet of Things and combating hackings and ransomware attacks. This time around, we're going global, talking about peace and security online, talking about how the international system is trying to bring stability and make sense of this new domain of conflict.
Digital transformation is happening at unprecedented speeds: AI, anyone? And policy and regulation need to evolve to keep up with that reality. Meanwhile, there has been widespread use of cyber operations in Russia's invasion of Ukraine, the first large-scale example of hybrid warfare. Well, what are the rules?
Enter the cyber diplomat. An increasing number of nations have them: ambassadors who are assigned not to a country or a region, but instead, assigned to addressing a range of issues online that require international cooperation. Many of these officials are based in Silicon Valley, and the European Union just recently opened a digital diplomacy office in San Francisco.
Meanwhile, the United States named its first Cyber Ambassador, Nathaniel Fick, to the State Department just last year. Here he is at a Council on Foreign Relations event describing the work of his office:
NATHANIEL FICK: Part of the goal here was to bring in not only one person, but a group of people with other perspectives, outside perspectives, in order to build something new inside the department. It's as close to a startup as you're going to get in a large bureaucracy like the Department of State. I think one of our goals again is to really restore public-private partnership to a substantive term.
ALI WYNE: What do cyber diplomats do? Why do we need them, and how do they interact with private sector companies around the world?
I'm talking about this subject with Benedikt Wechsler, Switzerland's Ambassador for Digitization. And Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft. Welcome to you both.
BENEDIKT WECHSLER: Pleasure to be here.
KAJA CIGLIC: Thank you for having us.
ALI WYNE: Ambassador, let me begin with you. What does an Ambassador for Digitization do, and why did Switzerland feel that it was necessary as a diplomatic position?
BENEDIKT WECHSLER: Diplomats - or diplomacy - is one of the oldest professions in the world actually. And when our new minister came in four years ago, he wanted to know what the world is going to look like in about eight, 10 years. And out of that came a foreign policy vision, which of course stated also that the digital world, the cyberspace, will be ever more important. And so an ambassador is sent abroad to promote and protect the interests of citizens and companies, but also to promote an international order which is conducive to serving the best interests of not only his own country but the whole world.
But we didn't have a structure that deals with the digital world, because there are new partners, there are new power centers, there are new actors, but the same interests and values are at stake. So we decided to set up a division for digitalization, which means exactly to promote the interests of citizens, Swiss companies, values, and human rights as well in that new field, and develop that with new partners. So that is our key mission.
ALI WYNE: When most other folks hear the word ambassador, we are thinking about an ambassador to country X. I think it's really exciting that we now have an ambassadorial position for this critical priority. So, run us through some of the most pressing problems that you're tackling in this new role for a very new remit.
BENEDIKT WECHSLER: Normally, we always have a little tendency to fix our ideas on problems and security and risks. So that, of course, is the underlying most important issue. This digital space, cyberspace, has to be a safe space; otherwise, people don't want to engage in such a space. That's also a change from the sort of physical world to the digital world. I just recently read a report that in Denmark, there have been no bank robberies anymore because there are no more banks where you can get money out. But, of course, they moved to cyberspace.
ALI WYNE: Right. Right.
BENEDIKT WECHSLER: Another little parallel to what we're trying to do is - back in the age of the railways, there was a problem that train coaches were moving from one country to another, but they didn't have a means to lock or unlock these train coaches. So a group of experts sat together in Bern and devised a special key, which is still in function today, which was able to lock and unlock these wagons and make train connections safer. So that is exactly what we now have to do in cyberspace: to find this key of Bern or the key of Geneva or the keys of wherever to make the digital cyberspace safe and workable.
ALI WYNE: Kaja, I want to come to you next, and just building off of the ambassador's remarks. You've said previously that cyber diplomacy is different from other traditional forms of diplomacy because it's multi-stakeholder. What do you mean by multi-stakeholder in this particular context and explain why Microsoft has what it calls a digital diplomacy team?
KAJA CIGLIC: So if you think about the origins of the internet, the internet has always been, from the onset, governed and set up by groups that are not always government. It includes governments, but it includes academia, includes the private sector, includes various representatives of civil society. And as the internet grew and became an ever-present part of our lives, it has meant that its governance structures grew with it.
Of course, governments, states, are there to determine what the regulations are. But it is global, so it's a little bit like what the ambassador was saying about the railways: finding ways for the trains not just to safely unlock and unload but to go from one country to another on the same tracks is also something that had to be figured out. I think that is where we are in the online space at the moment.
And currently, the vast majority of it is run and operated by the private sector. And that's why we say it has to be a different conversation that includes all these other stakeholders - hence the word multi-stakeholders - not just governments, which is more where traditional diplomatic conversations have been.
And Microsoft has identified this area as an area of interest and an area of a priority, almost 10 years ago now, when we first started talking about, "Okay, we need clear rules, particularly for safety and security online." Because as all aspects of our life are moving online, we need to make sure that they can continue operating more or less unthreatened.
ALI WYNE: And we talked a lot about this subject last season. We're going to be talking a lot again about this subject this season. Tell us what the Cybersecurity Tech Accord is and tell us how it fits into this conversation?
KAJA CIGLIC: Yeah, the Cybersecurity Tech Accord is a group of companies that effectively came together in 2018. At that point, at the onset, it was just above 30, and now it's just over 150 companies from all sizes from everywhere around the world that all agree that we need to be having this conversation, and we need to be making advances on how to secure peace and stability online not just now but for future generations.
And so the group came together around four fundamental principles: that all the companies are committed to strong defense of their customers; that all companies are committed to not conducting offensive operations in cyberspace; that all companies are committed to capacity building, so sharing knowledge and understanding; and that all companies are committed to working with each other and also with others in the community, not just private sector companies but, like I said earlier, civil society groups, academia, and governments, to try and advance these values and goals.
ALI WYNE: Ambassador, I feel like, every week now, we learn about some state-sponsored hacking or cyber-attacks. You think about air, land, sea - we at least have some clear rules for state behavior based on principles such as recognized borders, sovereign airspace, international waters. Do we have similar international expectations and/or obligations to be respected in cyberspace?
BENEDIKT WECHSLER: Sometimes, I think it's really, we have to be aware that although we are so familiar with the cyber and digital world, it's still a new technology. And when you look at air or sea, this has been decades. And also, there, the governance and all the rules and norms have evolved over time, over decades and years. And I think we don't have that much time to develop these organizations and rules as we had for the maritime or for the airspace. But I think we have to - as Kaja said, it's a new, specific, and especially multi-stakeholder world and space, and we have to take that into account.
So we cannot just set up a new organization like we did in the old days, and then we think, "Well, we'll sit together among states, and we will negotiate something, and then we'll have a good glass of wine, and then we hope that everybody is going to abide by these rules." No, I think we have to have a whole toolbox or maybe a Swiss army knife with all sorts of adapted tools for the differences of today. So for instance, we negotiated a classical cybercrime convention in Vienna. There's a process of the Open-Ended Working Group at the UN about responsible behavior in cyberspace where all these norms are being developed and where we have also civil society and companies, the private sector, being involved.
Then we have dialogues, for instance, the Sino-European Cyber Dialogue with China and the European states. We have it also with the United States. And there we are sort of defining, "Okay, international humanitarian law, what is protection of basic infrastructure?" So we're getting there. And I think also very importantly is that we engage very closely with the private sector because there's the knowledge, there's the innovation, so that we can really develop smart rules.
And then, lastly, I think we have to embrace much more also, the scientific world, because in science, there's so much progress and innovation and foresight that we have to take into account because this is going to happen much faster that this will become a reality. We cannot see where AI is going without also involving the scientific world.
ALI WYNE: This is the second season of Patching the System, and it's amazing. When I look back on the episodes that we recorded last year, it's extraordinary how much science has progressed, how much technology has progressed. And I suspect that the rate of that scientific and technological innovation, it's only going to grow with each year, but just an observation to say how rapidly these scientific and technological domains are…
BENEDIKT WECHSLER: You're right, and I think everybody was surprised. It's amazing.
ALI WYNE: I posed this question to Kaja earlier that we think about cyber diplomacy as being different from what we might call more “traditional” forms of diplomacy. But, in traditional diplomacy, ambassador, we often think of governments in aligned groups. So we think, for example, of the NATO alliance for security or we think of countries that support free markets and free speech versus those that support more state control. Are there similar alignments when it comes to matters of cyber diplomacy?
BENEDIKT WECHSLER: Yes, definitely. I mean, that is no secret. I think you have like-minded countries also in the tech and the digital space because we want to see technology being an enabler for better reaching sustainable development goals, expanding freedom, strengthening human rights, and not undermining human rights. And of course, there are countries in the world who have a contrary view and position.
On the other hand, I think it's interesting to see in mankind and international relations, there have always been antagonistic situations. But still, there was always some agreement consensus on certain things that, as humanity, we have to stick together. And even, I mean, in the coldest times of the Cold War, there was collaboration in space between the U.S. and the Soviet Union.
And I think we also feel a little bit that with this digital world, the internet, nobody has really cut itself off this world because they know it's just too important. And we have to build on this common heritage and common base that other countries are pursuing other ways and using this technology. There are things in warfare that we decided we shouldn't do, and we should keep and stick to this and maybe develop it where needed, but especially keeping the commitments also in the online cyber world.
ALI WYNE: Kaja I want to bring you back into the conversation. So from Microsoft's point of view, what is your overall sense of the trend line when it comes to nation-state activity online? Are we moving more towards order? Are we moving more towards chaos? And what can industry, including companies such as Microsoft, what can industry do to support the kind of diplomacy that the ambassador has described to advance what you might call a rules-based international order as it were online?
KAJA CIGLIC: I would probably say it's a bit of both. This situation is both terrifying and sort of deteriorating, I would say, at the same time. Part of the reason is because the technology's evolving so fast.
ALI WYNE: Right.
KAJA CIGLIC: And so, as a result, it means that every time there is a new tool put on the market, so to say, it can, and someone will try and test it as a weapon. So I think that's the reality of human nature. And we are seeing that the deteriorating situation is also reflecting what's going on in the offline world, right. I think, at the moment, in terms of geopolitics, we're not in the best place that we have ever been, and that's reflected online. At the same time, I would say not everything is super bleak. As the ambassador was saying, we do have rules. We have international law. We have human rights commitments. We have International Humanitarian Law, and while we do see these being breached, they're not being breached by the vast majority of countries.
They're being breached by a very small minority of countries. And I think, increasingly, we're seeing states that believe in the values of international law, that believe in these commitments that have been made in other domains over the past hundreds of years as important to reinforce and support in the online world. And as a result, they’re calling out bad behavior, they're calling out breaches of international law, and that's a very positive development.
But that doesn't mean that we should be complacent, right. I think, increasingly, states are seeing cyber as the conflict. Increasingly, we're seeing the private sector developing tools and weapons that are being used for offensive purposes.
Cyber mercenaries are effectively a new market that has emerged over the past five or so years and is booming because there is such an appetite for those type of technologies by governments. And I think to your last question in terms of how the industry can help and support - some of it is just we can share what we see. The big companies, in particular, we are often the vector through which the other governments or the targets get attacked. They use Microsoft systems. They operate on a cloud platform. So we see both the actors, we see the trends, and we see the emerging new techniques. And I think that's important for sort of the foreign policy experts around the world to be aware of, understand, and be able to act upon.
ALI WYNE: Ambassador, I want to come back to you. How do you think that the tech sector, in particular, should engage with the work of cyber diplomats such as yourself?
BENEDIKT WECHSLER: I think one important aspect is that we are dealing here with an infrastructure issue. It's not just a tool or a product. It's an infrastructure and a vital infrastructure. And I think that also implies then how the tech companies should and could be part of that. So I think they should build this infrastructure together. And we, at the same time, can learn a lot from the tech companies. Also, internally, for instance, before I took up this position, I never heard of the expression red teaming. But I mean, that's a whole way of working and making products safe, like how you check a car before you put it on the market.
And I think if we work together and adapt these red teaming processes so that we also involve human rights aspects and other safety aspects so that the products that really will come to the market are already in a state developed that they are not being able to use for some malign purposes. And I think we also have to think of new forms of governance where the tech sector is really a responsible constituting part of a governance and not just looking at an issue in terms of maybe a lobby perspective or how can we influence regulation in that or that sense, but to really build the whole house together.
ALI WYNE: Kaja, I think that we often talk about cyber operations in peacetime, and it's an entirely separate matter, different matter when we're talking about cyber operations being used in armed conflict, and Microsoft has been doing a lot of work reporting on Russia's use of cyber attacks in Ukraine. What has it looked like to integrate cyber operations in war - I think really, for the first time - what has the impact been?
KAJA CIGLIC: I think, definitely, for the first time at this scale where we're talking about use of cyber in a conflict, the impact has, in effect, been tremendous. As we look at even just before the war began, the Russians have effectively either prepositioned for espionage purposes or began doing destructive operations in Ukraine that supported their military goals. Over the past year and a half now, we've seen a level of coordination between attacks that are conducted online, so cyber attacks, including on civilian infrastructure, not just as part of the military operations and attacks that were then conducted by traditional military means - so effectively bombs.
And so we've seen definitely a level of similar targets attacked in a similar time period in a specific part of the country. So the alignment between cyber and kinetic operations in war has been, to that extent, something we've never, I think, seen not just Microsoft, but I think in general. The other thing to think about and consider is frequently the Russians have used foreign influence operations, so disinformation, as part of their war effort, often time in connection with cyber operations as well.
This is a tool, a technique that the Russians have used in the past as well. If you look at sort of their traditional foreign influence operation, just not online, in the 70s and the 80s, and that has transposed over to the hack and leak world, and they've used it to both weaken the Ukrainian resolve as well as to undermine their support abroad, particularly, in Europe but elsewhere as well.
And the only reason I would say neither of those have been nearly as successful as perhaps has been expected is the unexpected but wonderful ability for both Western governments and the private sector sort of across the board, irrespective of companies being competitors or anything like that, to come to Ukrainians’ defense.
We think that a lot of the attacks have been blunted also because the Ukrainian government very quickly, at the beginning of the war, decided to migrate a lot of the government data to the cloud, again, with Microsoft but also with competitors, and were thus able to effectively protect and continue operating the government normally, but from abroad.
ALI WYNE: So cyber offenses and cyber defenses, it seems, are increasing in parallel. So we have this kind of tit-for-tat game. Ambassador, considering this example of hybrid warfare in Ukraine, what are the lessons for the diplomatic community moving forward? And assuming that future armed conflicts will also have similar cyber elements to them, how should the international system prepare?
BENEDIKT WECHSLER: I mean, there's a component of probably the classic disarmament processes. Normally, I think you can expect every sort of nation or state would like to have a position of superiority just to feel safe and to be smarter and more capable than the others so that an attacker wouldn't dare to attack them.
But we arrived in the nuclear arms race at a stage where we had to say this is MAD - it's Mutually Assured Destruction - and although we still have an edge probably, I think, in the cyber world, we can almost, if they really want, we can sort of kill ourselves mutually. So that understanding leads you to a point where probably also states will accept, "Okay, but let's not kill each other totally."
ALI WYNE: That's a good starting point.
BENEDIKT WECHSLER: Yeah.
KAJA CIGLIC: That would be good, yeah.
BENEDIKT WECHSLER: So we'll have to ban some things. Okay. Maybe for some things we just have to sit together and say, "Well, we should outright ban this." And then we have other things where we cannot ban them, but we have to reduce the negative impact on civilians, on critical infrastructure, on vulnerable persons, and so forth. And so then we come into the story of the International Committee of the Red Cross, the Geneva Conventions. If we can't ban or eliminate war, let's see, at least, that we can make it as least impactful as possible for everybody. And of course, now we are coming into totally new terrain with AI as well, the autonomous lethal weapons systems, with the drones.
And also, I mean, think of the satellites issue, where a company like Starlink can more or less decide, "Well, now we can't give you coverage anymore," so then basically your operation will stop because you don't have the infrastructure anymore to launch an attack. But I think it's something that we have to tackle in the logic of disarmament, on banning or mitigating or limiting effects, but also on very specific items. So we had the Landmines Convention. We had the issue of certain munitions that we wanted to be banned. So I think it's going to be very hard, thorny work for diplomats to try to limit this to the maximum possible extent.
ALI WYNE: Even beyond the weaponization of cyberspace, I mean, just technology itself is constantly evolving. Just this year alone, we've seen a real explosion in generative AI and, as a result, a rush from both governments and the private sector to find a framework, some kind of regulatory framework. How do you view this new factor in terms of cyber diplomacy? How does this new factor affect the work that you're doing on a daily basis?
BENEDIKT WECHSLER: Well, I heard from somebody saying, "AI is the new digital." And, of course, we are also trying to see how can we develop tools based on AI to make diplomacy more efficient, also to make it a tool to provide more consensus on issues because you can probably gather more information, more data to show, "Well, we have a common interest here." And we launched a project. We call it the Swiss Call on Trust and Transparency in AI, where we are not looking into new regulations but rather what kind of formats of collaboration, what kind of platforms that we need to build up in order to get more trust and transparency.
And that builds a lot also on what has been done in the area of cybersecurity, on actions against ransomware. And again, what also Kaja said, it's about that the companies and diplomats or the governments are working together and sharing expertise because it's not a question of competition between private sector, but again, because building an infrastructure, a building that has to be solid and then within that infrastructure we can compete again.
ALI WYNE: Kaja, I want to bring you back into the conversation, and let's just zero in particular on the implications of artificial intelligence for security. How does Microsoft think that AI will play into concerns around escalating cyber conflict? And will AI, I mean, effectively just pour gasoline on the fire?
KAJA CIGLIC: I really don't think so. I think we actually have great opportunity to gain a little bit of an asymmetric advantage as defenders in this space. The reason is, while obviously malicious actors will abuse AI and probably are abusing AI today, we are using AI already, and we'll continue to use it to defend.
In Ukraine, we're using AI to help our defenders on cybersecurity. Microsoft gets 65 trillion - I think, it's some absurd number - signals on Microsoft platforms daily of what's going on online. Obviously, humans can't go through all of that to identify anomalies and neutralize threats. Technology can and technology is, right. So the AIs understand what's wrong - and this has happened already, but this will improve it even further - are looking at, "Oh, okay, this malware attack looks similar to something that has happened before. So I will preemptively stop it independently” right? I think that will actually help us in terms of cybersecurity.
ALI WYNE: Tell us one concrete step that you would like to see taken to get us somewhat closer to a sustainable diplomatic framework for cyber. Kaja, let's start with you, and then we'll bring the ambassador to close this out.
KAJA CIGLIC: I think it'd be really important for the UN to recognize this as a real issue. I think there is a bunch of working groups. We've seen the Secretary General make statements about and call on states to the war. But a permanent body effectively within the United Nations that would discuss some of these issues would be very welcome. At the moment, there's a lot of working groups or group of governmental experts that kind of get renewed for every five or so years, and there's not a dedicated effort necessarily focused on some of these issues.
And then, of course, we would love to find a way for the industry, but the multi-stakeholder community writ large to be able to participate and share their insights and knowledge in this area. Like I was saying at the beginning, there are opportunities. I would say Microsoft and many other private sector groups get blocked a fair amount by certain states. And I get it's a political decision at some level, but it's something that we'd really, really like to see institutionalized - both a process and the multi-stakeholder inclusion.
BENEDIKT WECHSLER: I see sort of a historic window of opportunity opening up with the work on the Global Digital Compact. We have the Summit of the Future next year, the Common Agenda. So a little bit like with the SDGs, that we as the world community are coming together and saying, "Okay, this is really too important. We are all in this together." Maybe also movies like Oppenheimer are reminding us of some things in the past.
And I'd like to close with Albert Einstein, who said in the 30s of last century that, "Technology advances could have made human life carefree and happy if the development of the organizing power of men, back then and women, I would say today, had been able to keep step with its technical advances. Instead, the hardly bought achievements of the machine age in the hands of our generation are as dangerous as a razor in the hands of a three-year-old child." So I hope that we see this urgency, but also this huge opportunity that we had already once as a humanity, but that we don't fully grasp it this time.
ALI WYNE: KAJA CIGLIC, Senior Director of Digital Diplomacy at Microsoft, Ambassador BENEDIKT WECHSLER, Switzerland's Ambassador for Digitization. Thank you so much for taking the time to speak with me. Thank you so much for taking the time to enlighten our audience. It's been a real pleasure.
KAJA CIGLIC: Thank you. This was a great conversation.
BENEDIKT WECHSLER: Thank you. It was a privilege to be with you.
ALI WYNE: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of this new season. I'm Ali Wyne. Thanks very much for listening.
Podcast: A cybercrime treaty proposed by…Russia?
Listen: Cybercrime is a rapidly growing threat, and one that will require a global effort to combat. But could some of the same measures taken to fight criminals online lead to human rights abuses and a curtailing of freedom?
As the United Nations debates a new and expansive cybercrime treaty first proposed by Russia, we’re examining the details of the plan, how feasible it would be to find consensus, and what potential dangers await if the treaty is misused by authoritarian governments.
Our participants for this fifth and final episode of “Patching the System” are:
- Amy Hogan-Burney, General Manager, Microsoft’s Digital Crimes Unit
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
This special podcast series from GZERO Media is produced in partnership with Microsoft as part of the award-winning Global Stage series. “Patching the System” highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: A cybercrime treaty proposed by…Russia?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Amy Hogan-Burney: It's clear from this conversation, clear from the work I do every single day, that there is a greater need for international cooperation because as cyber crime escalates, it's clearly borderless and it clearly requires both the public sector and the private sector to work on the problem.
Ali Wyne: Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group.
Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
And today in our final installment of this podcast, we will talk a little bit more about protecting businesses, governments, and citizens from cyber crime well into the future. Now, technology moves fast and certainly in the past two years of the pandemic, we've seen our reliance on it grow by leaps and bounds.
Unfortunately, though, cyber crime is also growing and evolving. And while there are treaties and agreements to support international cooperation in combating cyber crime, there are also ongoing negotiations over a new cyber crime treaty at the UN. Some governments claim that such a comprehensive treaty is necessary while others, as well as industry and civil society groups, raise concerns about how such a treaty might affect rights and freedoms we have come to expect online.
My guest today is Amy Hogan-Burney, general manager of Microsoft's Digital Crimes Unit. Amy, welcome.
Amy Hogan-Burney: Thank you. I'm pleased to be here today.
Ali Wyne: I'm just going to dive right in. So for those who aren't deeply working in this space, this cyber space, as it were, on a daily basis, as you do, why don't we just start with some basic definitions? So how do you define cyber criminals and how do you differentiate them from other kinds of bad actors on the internet? So maybe just beginning with some semantics and definitions for our audience.
Amy Hogan-Burney: Sure. It's a great question. Like really, what is cyber crime? I think the answer is generally that cyber crime is using a computer to illegally access or transmit or manipulate data. And it really is for financial gain, but it can also be for political advantage or for geopolitical reasons. In our work, we can see cyber crime affecting individuals. So for example, online child exploitation. Or we can see it affecting property, so theft of intellectual property or other confidential information from businesses. The thing that both of the examples that I just gave you have in common is a computer was used to commit the criminal activity. And it couldn't have been accomplished without an internet connected device. So I can, and I probably would use a computer to plan my bank robbery, but that bank robbery is not a cyber crime, I think is the best way I can describe it.
Amy Hogan-Burney: I just want to make sure everyone knows I'm not going to rob a bank though.
Ali Wyne: No, we'll take your word because you're busy fighting crime. Speaking of which, you've been fighting cyber crime for a long time. And actually your title, it sounds a bit like a network TV crime drama. I just want to remind folks who are listening: you're general manager of the Digital Crimes Unit at Microsoft. When you started, what did cyber crime look like? And now, how have you seen the space of cyber crime evolve?
Amy Hogan-Burney: So first, this question makes me feel old. It made me sound like I've been around for a long time, but it's okay.
Ali Wyne: You've been fighting for a long time. You've been fighting the good fight for a long time.
Amy Hogan-Burney: But I will say, when I first started back about 10 years ago, I was at the FBI. And 10 years ago, cybercrime largely looked like denial of service attacks on banks. And the financial sector is really mature in fighting cybercrime, frankly, because it's had to be. Those attacks were really used to distract security teams so that criminals could steal personal data and banking credentials. And 10, 15 years ago, that really damaged the reputations of financial institutions. And they've worked incredibly hard at combating cybercrime for that exact reason. And while we still do see DDoS attacks in the financial sector, I would really say cybercrime at this point has evolved to be a threat to national security. And I say that for, I think, two reasons.
First is we're seeing cyber criminals attacking critical infrastructure: healthcare, public health, information technology, financial services, the energy sector, things like that. And then we're seeing ransomware attacks that are increasingly successful, and those are actually crippling governments and businesses. We also see the profits from this criminal activity really soaring, in business email compromise, ransomware, and other things. And so we're seeing, I think, an increase in criminals who are able to get more money, and they're broadening their scope.
Ali Wyne: You already have sort of touched on in your answer just now the way in which cybercrime has grown in scale, it's become more sophisticated. It originally was targeting primarily financial institutions, but now it's really evolved. And it's almost, for lack of a better phrase, you could say that it's become professionalized. It really has become an industry in and of itself.
What's driving it? One explanation might just be, look, the Internet's growing, and hackers and coders, they're just getting more sophisticated as the internet grows. But is that explanation sufficient or do you think there's more that's going on to explain why we're seeing this surge of cybercrime?
Amy Hogan-Burney: Yeah, I think the answer is that years ago, most of the technology was inside the United States. Most of the criminals and the sophisticated developers were located inside the US. And the actors were largely working alone. They were a very small, tight-knit, technically savvy group. And we just don't see that anymore. What we're really seeing at this point is cybercrime as a service, where you don't need technically savvy people to commit the criminal activity. You don't need to be a programmer. You don't need to be a developer. We have a cybercrime supply chain that is created by big criminal syndicates. And those criminal syndicates sell their services, allowing anyone to conduct this nefarious activity, whether it's for financial gain or for other nefarious purposes.
We're also seeing cyber criminals located around the world. And unfortunately, in many cases, they're operating in permissive jurisdictions. And we're seeing this malicious infrastructure located around the world. So we can see domains or servers located in more jurisdictions than we've ever seen before. So I have a case right now, which I'm happy to describe for you, but I have servers located in Brazil and Bulgaria and Bangladesh, and I just gave you the Bs because that's just what I happened to look at right before I-
Ali Wyne: I was just about to ask, I said, all three of those countries begin with B.
Amy Hogan-Burney: Those are the ones that begin with B. I actually pulled up the spreadsheet and I was like, oh, I'll just pick the B countries.
Ali Wyne: Oh, got it, got it. So you used this phrase, and it's really evocative; it's actually the first time I've heard it: cyber crime supply chain. At least in my limited way of thinking, when I think of supply chains, I think of supply chains that provision vital commodities: supply chains for medicines, supply chains for technological inputs that go into our phones and our computers. I generally have either a neutral association with the phrase supply chain or a favorable one. But to think about a cyber crime supply chain, that phrase you used, is incredibly evocative. And I think it gives us a sense of the way in which cyber crime has evolved.
I want to turn a little bit to the other side of the ledger. Presumably as cyber crime escalates, so do efforts to fight cyber crime. So the two sort of go hand in hand. And we'll talk a little bit more later about what governments can do, what they are doing. But in your work specifically at Microsoft, what are some of the ways in which you've adapted to the challenges presented by cyber crime? And realistically, what can Microsoft and other companies do to combat it?
Amy Hogan-Burney: Yeah, it's a great question. And I am incredibly fortunate to lead a team where this is all we focus on: identifying cyber criminal actors. And then we refer those to law enforcement through criminal referrals. And at the same time, we also identify the actual technology that's used by cyber criminals. And then we seek to take that down. And we do that either in cooperation with other third-party providers, or with civil cases. And I think it's super helpful to maybe give an example here.
That ties back to my Brazil, Bulgaria, Bangladesh example. I think in October of 2020, we decided to take down Trickbot, which is a very large botnet. But we had a really specific reason for doing that. We were concerned and had heard from the US government that they were worried about any possible disruptions to the US election, which was in November of 2020. And they were concerned that there could be a potential ransomware attack, not on the actual technology used to cast ballots, but a ransomware attack on voter rolls or other things, something that could undermine confidence in the election, even if it didn't tamper with election results. And they really didn't want anything to undermine confidence in it for obvious reasons. And Trickbot was one of the largest deliverers of ransomware. So we thought, okay, we will go after the delivery system to make sure that there's no ransomware in this case.
So we brought a civil case in October of 2020 to seize all of the infrastructure used by those cyber criminals. At the same time, the US government took their own action, both inside and outside of the US. And we really did, I think, a very, very good job of getting rid of that infrastructure in October of 2020. But this was a little different, I think, in a couple of ways; we learned a lot back in October. And the first thing that we learned is that the criminals that ran Trickbot would sell Trickbot as a service. And so not only did we affect their infrastructure, we affected their business model. And they were really unhappy with us. And what they did was fight back. And one of the things that really surprised me is not only did they fight back, but they went after hospitals.
Ali Wyne: Goodness.
Amy Hogan-Burney: They did it during the pandemic. And they really were trying to prove that their service still worked by going after a pretty vulnerable population during a really sensitive time. And that was pretty surprising to us. We partnered with law enforcement and incident responders to really protect the healthcare system, but it was a big learning experience. And then the second thing is that they worked furiously to rebuild because, like I said, we had impacted their business model. And we are still taking down Trickbot infrastructure today. So we've kind of moved into this phase that I call advanced persistent disruption. And those servers that I told you I looked up today on our dashboard in Brazil and Bulgaria and Bangladesh, they are up and running for Trickbot today, for an operation I started back in October of 2020.
We will work with the provider to take those down. It usually takes about 24 hours. But it just shows how we're at a different place, where we have a supply chain that we're constantly combating now, versus years ago, when we used to be able to do an operation, take something down, and move on. These operations are really sophisticated and much larger in scope and scale.
Ali Wyne: So you've mentioned the three Bs: you looked at the dashboard this morning and saw Brazil, Bulgaria, and Bangladesh. Just from those three Bs alone, when we talk about cyber crime, it seems we're basically talking about an issue that defies borders. A single crime can take place across several jurisdictions at once. So when you're dealing with an inherently borderless challenge such as cyber crime, what are some of the tools and international instruments that are currently in place to support cooperation with both law enforcement and industry?
Amy Hogan-Burney: First, I think global cooperation is just essential in this space. So places where we have the private sector sharing information about cyber threats, where we can work together to seize that criminal infrastructure, because Microsoft is not a law enforcement agency. So while I can do all kinds of things to protect customers, to track threats, to provide notice, and to assist victims, it really is essential that law enforcement is also able to seize infrastructure and to arrest the individuals behind this work. And this really wouldn't be possible, I don't think, without the Council of Europe's convention on cyber crime, the Budapest Convention, which is a longstanding, really valuable tool we have in this area. It's been in place over 20 years. It's been ratified by 66 governments across regions. And it really is a guideline for domestic cyber legislation.
I think the other part that's really important to us is that anytime we have international cooperation, we also have a focus on protecting human rights. And the Budapest Convention does that as well. The other part I would add, and I think we've touched on this throughout this conversation already, is just that evolution. Every time we see the internet evolve, and products and services and other things grow and evolve, we unfortunately see criminals grow and evolve as well. And that means that we also need to see the legal framework and the conventions grow and evolve. And we see that with the Budapest Convention, where additional protocols have been added. And I think you'll see more this spring, which will have greater support for international cooperation and more access to evidence, so that law enforcement can pursue those prosecutions of cyber criminals.
Ali Wyne: So coming to the present, you talked about the centrality of international cooperation. So let's turn to the current cyber crime treaty negotiations that are taking place at the United Nations. Just from a bird's-eye, 30,000-foot view, what are those negotiations about? And what's the ultimate goal of this treaty that is being negotiated?
Amy Hogan-Burney: Yeah, it's a great question. It's just really hard for me to tell. On the one hand, it's clear from this conversation, clear from the work I do every single day, that there is a greater need for international cooperation because as cyber crime escalates, it's clearly borderless and it clearly requires both the public sector and the private sector to work on the problem. Although I am just not certain that a new treaty will actually increase that cooperation. And I'm a little concerned that it might do more harm than good. One of the things that we're constantly thinking about is, yes, we want to be able to go after cyber criminals across jurisdictions.
But at the same time, we want to make sure that we're protecting fundamental freedoms, always respectful of privacy and other things. Also, we're always mindful of authoritarian states that may be using these negotiations to criminalize content or freedom of expression. So I get concerned about any treaty that looks like it may impact journalists or political dissidents or any other vulnerable group. And given that the Budapest Convention has been in place for 20 years, we certainly don't want to see anything that undermines or conflicts with the Budapest Convention.
Ali Wyne: Let's talk about the elephant in the room. Turning to the prospect of digital authoritarianism, Russia is obviously a major part of these negotiations that we've just been discussing. They not only sponsored the resolution that began these negotiations, but I think pretty surprisingly, they actually released a draft text for the treaty. Can you give us a little bit of insight into what their proposal entails?
Amy Hogan-Burney: Sure. First I will say, I think everyone was very surprised to see a draft text. Also it's 70 pages.
Ali Wyne: 70 pages? Wow.
Amy Hogan-Burney: Yeah, so I will spare everyone a complete legal analysis. And I will also say it takes quite a commitment to read.
Ali Wyne: Sure.
Amy Hogan-Burney: The draft, I think, is really focused on individual state interests versus kind of broad global cooperation. It's also very broad. Even at the very beginning of the draft, it starts by saying that "It's designed," and this is a quote, "to promote and strengthen measures aimed at effectively preventing and combating crime and other unlawful acts in the field of ICT," which is information and communications technology. Combating crimes and unlawful acts in the field of ICT is just so incredibly broad and very different than the definition that I gave you at the beginning of what cyber crime really is. And so that broad criminalization, I think, really brings up the risk to freedom of expression, privacy and other things.
And also, I think that vague language, the suggestion that just because a computer is involved, it is part of this treaty, is also concerning. So it brings us back to my bank robbery example. My bank robbery, should I seek to commit it, certainly shouldn't be covered by a cyber crime treaty that's being negotiated at the UN. And so I think the first real big concern is making sure that we're very clear on definitions, and that we're really focused on cyber crime, such as very clear cyber-dependent offenses and those that are enabled by computers. And that definition, I think, really needs to be clarified in any draft.
Ali Wyne: And that's another important distinction that you posited: between cyber crime in the narrower, more helpful sense that you specified at the outset of our conversation, versus cyber-enabled crimes or cyber-enabled nefarious activity. I think positing that distinction is really important. And you mentioned that Russia has a draft text. The definition of cyber crime, or I shouldn't even say a definition, it's such an expansive conception that it can subsume almost any kind of activity involving a computer. So it's so expansive as to be not only unhelpful but, as you said, potentially catch a lot of people in its net who really don't deserve to be caught up.
Ali Wyne: Let me turn, Amy, to another question. Continuing the discussion of this current cyber crime treaty that's being negotiated at the UN. What is the main argument for a new or more inclusive treaty, and how and why could it potentially fight cyber crime more effectively?
Amy Hogan-Burney: So I think anytime I see conversations being had about well-scoped or structured international legal instruments to combat cyber crime, I have to be supportive of that. Because if we can get to a place where there is widely adopted consensus across governments that would allow for common legal frameworks and obligations across jurisdictions, that would allow for effective cooperation, and that would provide clear and repeatable access to data, this will be helpful in allowing governments to hold cyber criminals accountable. I think this will be helpful for the private sector and the public sector to work together to take down that malicious infrastructure. As long as we are making sure that we have a human-centric approach to this: we really need to make sure that there is a right to redress for individuals in case any rights are violated, and that it doesn't expand authorities in a way that allows law enforcement to trample those fundamental rights that we discussed before.
Ali Wyne: Is the biggest challenge in combating cyber crime just the lack of a common framework? How much of an impact would a new treaty have? And let's say a new treaty that would be up to your standards, a new treaty that you would advocate. Let's say that it were to be endorsed and let's say that it were to come into effect. What are some of the outstanding challenges that even a good treaty that lives up to your standards might not be able to solve as cyber crime continues to evolve?
Amy Hogan-Burney: Yeah, I wish a new treaty would solve all my problems. That would be great. I mean, someday maybe a new treaty will work me out of a job, but I don't think so. So I don't think even a perfect treaty will be the silver bullet. But I think one of the big things we see for many states is that there really needs to be capacity building. We've talked a lot about the technology that's involved in this, how sophisticated these criminal infrastructures are, how sophisticated the criminal groups are. And so I think more really needs to be done to support law enforcement agencies to up their technological capability, to better preserve and collect information, to share evidence, and to perform digital forensics, so that they can work with others. And so a treaty is great, a legal framework is great, but we need to have people on the ground who are able to implement that. And that, I think, is one of the most important things.
Ali Wyne: You've given us a sense of what a good treaty might entail, obviously, recognizing that it's not a silver bullet. But let's push on that a little bit more and let's imagine sort of two scenarios. So in scenario one, a treaty isn't reached. So then if a treaty isn't reached, what are some of the possible outcomes? And then scenario two, let's imagine that a treaty is reached, but it doesn't really reflect the traditional understandings of cyber crime. So maybe walk us through those two scenarios and what you think the implications would be for cyber crime in each of those two scenarios.
Amy Hogan-Burney: So if a treaty isn't reached, I don't think that this exercise is all for naught. I think the first thing is that having the conversations about international cooperation, about definitional issues around cyber crime, and about protecting fundamental rights in this space is really important. And I also think it raises the issue of the permissive environments that we sometimes see, where we have nations that are allowing this illegal activity to be conducted in their jurisdictions; it's an unfortunate reality. But by raising this at the international level and in the UN, it forces us to have this conversation. So even if a treaty isn't reached, we have still had that international cooperative conversation.
If a treaty is reached, I think a lot just depends on how many states adopt the treaty and how it is enforced. I would imagine that there could be a change in the level of privacy and freedom of expression for individuals in the countries that put this agreement into force. And I also have concerns, frankly, that it could threaten the vision of a global public internet if there are obligations that are inconsistent with our current international framework.
Ali Wyne: Sure. So let's leave the audience with a little bit of hope. What is the ideal or potential new digital world that could be created by a cyber crime treaty that lives up to your standards? Maybe leave us with a little bit of cautious optimism about some of the possibilities of not a brave new digital world, but a better digital world.
Amy Hogan-Burney: Yeah. I always do worry about being such a Debbie Downer when I do these, because I just talk constantly about the threat. So I do like to try to leave with hope. And I think there's hope, first, in that we're seeing more victims come forward, more transparency, and more conversation here. And if we were to see a new cyber crime treaty that enables law enforcement cooperation across jurisdictions and improves access to data while protecting fundamental rights, I just think it could go such a long way towards aligning our efforts internationally, across governments and across the private sector. And it could go a long way towards protecting those victims I mentioned, who are really starting to humanize the cyber crime that is out there.
Ali Wyne: Amy Hogan-Burney, general manager of Microsoft's Digital Crimes Unit. Amy, thank you so much for being with us today. Thank you so much for sharing your insights. It's been a real pleasure.
Amy Hogan-Burney: Thank you for having me.
Ali Wyne: And before we go, let's check in once more with Annalaura Gallo, head of the secretariat of the Tech Accord, about what the Tech Accord and its industry partners hope to see from a cyber crime treaty.
Annalaura Gallo: Last year, before the UN negotiations on the new cyber crime convention started, the Tech Accord, together with the CyberPeace Institute and over 60 organizations, launched a manifesto on cyber crime, highlighting the principles for a cyber crime convention that safeguards human rights online and a free, open and secure internet. The manifesto includes a set of principles that the states conducting the negotiations should keep in mind to ensure that the treaty preserves human rights, but also a free and open internet. First of all, the new cyber crime treaty should protect targets and victims of cyber crime. We think this is a very important point to take into account. It should also ensure that there is effective international cooperation across sectors and between the public and the private sector, but also maintain existing international legal obligations. A new cyber crime treaty should not be an avenue for states to reduce their existing obligations. And of course the manifesto also focuses on the importance of a multi-stakeholder approach to the negotiations, ensuring that non-governmental stakeholders are included in the process. And we were happy to see that there has definitely been an inclusive approach so far.
Ali Wyne: That's it for this, the final episode of the Patching the System series. Thank you for joining us, and be sure to listen to all of our episodes, on topics including cyber mercenaries, supply chain attacks, hybrid warfare, and the internet of things, for deep-dive discussions with industry experts from the Cybersecurity Tech Accord on the most pressing challenges online. All episodes are available in Ian Bremmer's GZERO World feed anywhere you get your podcasts. And for more information on the Cybersecurity Tech Accord and its ongoing efforts to give a voice to the industry on matters of peace and security online, you can check out their website at cybertechaccord.org. I'm Ali Wyne, thanks very much for listening.
Podcast: Cyber Mercenaries and the digital “wild west"
Listen: The concept of mercenaries, hired soldiers and specialists working privately to fight a nation’s battles, is nearly as old as war itself.
In our fourth episode of “Patching the System,” we’re discussing the threat cyber mercenaries pose to individuals, governments, and the private sector. We’ll examine how spyware used to track criminal and terrorist activity around the world has been abused by bad actors in cyber space who are hacking and spying on activists, journalists, and even government officials. And we’ll talk about what’s being done to stop it.
Our participants are:
- John Scott-Railton, Senior Researcher at the Citizen Lab at the University of Toronto's Munk School
- David Agranovich, Director of Global Threat Disruption at Meta.
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
John Scott-Railton: You go to a growing number of mercenary spyware companies and surveillance companies that basically offer you NSA-style capabilities in a box and say, "Look, you can pay us a certain amount of money and we're going to send you this stuff." You're seeing basically the direct proliferation, not only of those capabilities, but actually of national security information about how to do this kind of hacking, moving its way right into the private sector.
David Agranovich: They fill a niche in the market, nation states that lack surveillance capabilities themselves, threat actors who want deniability in their surveillance activities and clients like law firms or litigants who want an edge on their competition. In reality, the industry is putting a thin veneer of professionalism over the same type of abusive activity that we would see from other malicious hacking groups.
INTERVIEW
Ali Wyne: Welcome to Patching the System, a special podcast for the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a Senior Analyst at Eurasia Group.
Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. And today we're talking about mercenaries, and the concept is almost as old as warfare itself: hired guns, professional soldiers used in armed conflict. From Germans employed by the Romans in the fourth century, to the Routiers of the Middle Ages, to modern day security firms whose fighters have been used in the Iraq and Afghanistan wars, as well as the current war in Ukraine.
But our conversation today is about cyber mercenaries. Now these are financially motivated private actors working in the online world to hack, to attack, and to spy on behalf of governments. And in today's world, where warfare is increasingly waged in the digital realm, nations use all the tools at their disposal to monitor criminal and terrorist activity online.
Now that includes spyware tools such as Pegasus, a software made by the Israel-based cyber security firm NSO Group that is designed to gain access to smartphones surreptitiously in order to spy on targets. But that same software, which government organizations around the world have used to track terrorists and criminals, has also been used to spy on activists, journalists, even officials with the U.S. State Department.
Here to talk more about the growing world of cyber mercenaries and the tech tools they use and abuse are two top experts in the field, John Scott-Railton or JSR, he's a Senior Researcher at the Citizen Lab at the University of Toronto's Munk School and David Agranovich, who now brings his years of experience in the policy space to his role as Director of Global Threat Disruption at Meta. Welcome to both of you.
JSR: Good to be here.
David Agranovich: Thanks for having us.
Ali Wyne: JSR, I'm going to start with you. So I mentioned in my introductory remarks, this Pegasus software. So tell us a little bit more about that software produced by the NSO Group and how it illustrates the challenges that we're here to talk about today?
JSR: So you can think of Pegasus as something like a service. Governments around the world have a strong appetite to gain access to people's devices and to know what they're typing and chatting about in encrypted ways. And Pegasus is a service to do it. It's a technology for infecting phones remotely, increasingly with zero-click vulnerabilities. That means accessing the phones without any deception required; nobody needs to be tricked into clicking a link or opening an attachment. And then to turn the phone into a virtual spy in the person's pocket. Once a device is infected with Pegasus, it can do everything that the user can do, and some things that the user can't. So it can siphon off chats, pictures, contact lists, but also remotely enable the microphone and the video camera to turn the phone into a bug in a room, for example. And it can do something else: it can take the credentials that the user, the victim, uses to access their cloud accounts, siphon those away too, and use those even after the infection is long gone to maintain access to people's clouds.
So you can think of it as a devastating and total access to a person's digital world. NSO, of course, is just one of the many companies that makes this kind of spyware. We've heard a lot about them, in part, because there's just an absolute mountain of abuse cases. Some of them discovered by myself and my colleagues around the world, with governments acquiring this technology, perhaps under the rubric of doing anti-terror or criminal investigation, but of course they wind up conducting political espionage, monitoring journalists and others.
Ali Wyne: David, let me come to you. So I think that we should just, before we dive into the deeper conversation, getting a little bit into semantics, a little bit into nomenclature. But let's just start with some basic definitions. When most folks hear the phrase cyber mercenary, some of them might just think it's any kind of bad actor, hacker, others of them might draw parallels to real life, analog kind of mercenaries, so sort of hired soldiers in war. So how do you define the phrase cyber mercenary? How does Meta define the term cyber mercenary and why?
David Agranovich: So maybe just to ground ourselves in definitions a bit. My team at Meta works to coordinate disruption and deterrence of a whole ecosystem of adversarial threat actors online. And so that can include things like info ops, efforts to manipulate and corrupt public debate through fake personas. It can include cyber espionage activity, which is similar to what we're talking about today: efforts to hack people's phones, email addresses, devices, and scaled spamming abuse. When we're talking about cyber mercenary groups, I think of that within the broader cyber espionage space. There are people who are engaged in, as JSR talked about, surveillance, efforts to try and collect info on people, to hack their devices, to gain access to private information across the broader internet. These are private companies who are offering surveillance capabilities, which were once essentially the exclusive remit of nation state intelligence services, to any paying client.
The global surveillance-for-hire industry, for example, targets people across the internet to collect intelligence, to try and manipulate them into revealing information about themselves and ultimately to try and compromise their devices, their accounts, steal their data. They'll often claim that their services and the surveillance ware that they build are intended to focus on criminals, on terrorists. But what our teams have found and groups doing the incredible work like Citizen Lab is that they're regularly targeting journalists, dissidents, critics of authoritarian regimes, the family of opposition figures and human rights activists around the world.
These companies are part of a sprawling industry that provides these intrusive tools and surveillance services, indiscriminately to any customer, regardless of who they're targeting or the human rights abuses that they might enable.
Ali Wyne: What strikes me just in listening to your response is not only how vast, how sprawling this industry is, but also how quickly it seems to have risen up, just comparing the state of this industry today versus even 10 years ago or even five years ago. How did it rise up? What are some of the forces that are propelling its growth? Give us a sense of its origin story and what the current state of play of this industry is today.
David Agranovich: As we see it, these firms grew out of essentially two principal factors. The first is impunity, and the second is a demand for sophisticated surveillance capabilities from less sophisticated actors.
On the first point, companies like NSO or Black Cube or those that we cited in our investigative report from December last year, they wouldn't be able to flagrantly violate the privacy of innocent people if they faced real scrutiny and costs for their actions. But also, to that second point, they fill a niche in the market, nation states that lack surveillance capabilities themselves, threat actors who want deniability in their surveillance activities and clients like law firms or litigants who want an edge on their competition. In reality, the industry is putting a thin veneer of professionalism over the same type of abusive activity that we would see from other malicious hacking groups.
Ali Wyne: So JSR, so I want to come to you now. So David has kind of given us this origin story and he has given us a state of play and has really given us a sense of how sprawling this industry is. So, I guess, for lack of a better phrase, there are jobs here, there are jobs in this space. Who's hiring these cyber mercenaries and for what purposes? Who are they targeting?
JSR: There are a lot of jobs. And I think what's interesting, David pointed out the problem about accountability. And I think that's exactly right. Right now, you have an ecosystem that is largely defined only by what people will pay for, which is a seemingly endless problem set. So who's paying? Well, you have a lot of governments that are looking for this kind of capability that can't develop it endogenously and so go onto the market and look for it. I think even after the Snowden revelations, a lot of governments were like, "Man, I wish I had that stuff. How do we get that?" And the answer is increasingly simple. You go to a growing number of mercenary spyware companies and surveillance companies that basically offer you NSA-style capabilities in a box and say, "Look, you can pay us a certain amount of money and we're going to send you this stuff."
And as David points out, a lot of it is done under the sort of rhetorical flag of convenience of saying, "Well, this is stuff for tracking terrorists and criminals." But actually at this point, we probably have more evidence of abuses than we do confirmed cases where this stuff has been used against criminals. Who's doing the work? A lot of the people who go into this industry are hired by companies with names like NSO, Candiru and others. Many of them come out of government, they come out of either doing their military service in a place like Israel in a unit that focuses on cyber warfare or they come out of places like the CIA, the NSA, Five Eyes and other countries' intelligence services.
Which in itself is really concerning because you're seeing basically the direct proliferation, not only of those capabilities, but actually national security information about how to do this kind of hacking moving its way right into the private sector. And we've seen some really interesting cases in the last year of people who came out of The US intelligence community, for example, doing exactly this kind of thing and then pretty recently getting indicted for it. And so my hope is that we're beginning to see a bit of accountability around this but it's a really concerning problem set in part because the knowledge is specialized, a lot of it relates to countries' national security and it's now flowing into a big, sprawling unregulated marketplace.
Ali Wyne: So David, let's build on what JSR just said. So we have this big, sprawling, seemingly increasingly unregulated surveillance ecosystem. It's more democratized, there are more individuals who can participate, the surveillance is getting more sophisticated. So I want to go back to your day job. Honestly, you have a big purview, you head up Global Threat Disruption at Meta, which is responsible for a very wide range of platforms. Which groups do you see, in your professional capacity at Meta, as being most vulnerable to the actions of cyber mercenaries?
David Agranovich: So I think what's remarkable about these types of cyber mercenary groups, as JSR has noted, is just how indiscriminate their targeting is across the internet and how diverse that targeting is across multiple different internet platforms. When we released our research report into seven cyber mercenary entities last year, we found that the targets of those networks ranged from journalists and opposition politicians to litigants in lawsuits to democracy activists. That targeting wasn't confined to our platforms either. One of the most concerning trends that we saw across these networks, and which Citizen Lab has done a significant amount of investigative reporting into, is the use of these types of technologies to target journalists, often in countries where press freedoms are at risk — and the use of these types of technologies, not just to try and collect open source information about someone, but really trying to break into their private information, to hack their devices.
Some of the capabilities that JSR mentioned about the Pegasus malware for example, are incredibly privacy intrusive. Ultimately the problem that I see here is these firms effectively obscure the identity of their clients. Which means anybody, authoritarian regimes, corrupt officials, any client willing to pay the money, can ostensibly turn these types of powerful surveillance tools on anyone that they dislike. And so to answer your question, who's most vulnerable? The reality is that anyone can be, it's why we have to take the activities of these types of firms so seriously.
Ali Wyne: So you both have given us a sense of, again, this really sprawling surveillance ecosystem, the growing range of targets, the growing democratization of this kind of nefarious activity. Can you give us a sense of what tactics you've seen lately that are new? I mean, when I think back to some of the earlier conversations we've had in this podcast series, some of the guests we've had have said, look, there are basic precautionary measures that all of us can take, whether we are a technology expert, such as yourselves or whether we're just a lay consumer.
So use different passwords for different platforms, taking basic steps to safeguard our information. But obviously I think that the pace at which individuals can adapt and the pace at which individuals can take preventative measures, I think is invariably going to be outstripped by the speed with which actors can adapt and find new ways of engaging in cyber mercenary activities. So in your time at Meta, have you seen new tactics being used by these groups in recent years and how are you tracking those and identifying them?
David Agranovich: So maybe just to ground our understanding of how these operations work.
Ali Wyne: Sure.
David Agranovich: How do these tactics fit across the taxonomy? We break these operations down into three phases, what we call The Surveillance Chain. The first phase, called reconnaissance, is essentially an effort by a threat actor to build a profile on their target through open source information. The second phase, which we call engagement, is where that threat actor starts to try and build a rapport with the target, with the goal of social engineering them into the final phase, which is exploitation. That final step, which most often happens off of our platform, is where the target receives malware or a spearphishing link in an attempt to steal their account data.
Generally, the way we see the tactics throughout these three phases play out is we'll see these operations use social media early in their targeting to collect information to build a profile in the reconnaissance phase or to try and engage with a target and build a rapport in the engagement phases. And then they'll attempt to divert their target to other platforms like malware riddled websites, for example, where they might try to get a target to download a Trojanized chat application that then delivers malware onto their device or other social media platforms where they'll try and exploit them directly.
David Agranovich: I think the most consistent trend we see with these types of operations is adversarial adaptation. What that means is when we do these take downs and when our teams publish reports on the tactic we're seeing or when in open source investigative organizations or civil society groups find these types of networks themselves and disclose what they're doing, these firms adapt quickly to try and get around our detection. It ultimately makes it really important, one, to keep investigating and holding these firms accountable. And two, to essentially follow these threats wherever they may go, tackle this threat as a whole of society problem. That's going to require more comprehensive response if we want to see these types of tools used in a responsible way. But those are, I think, some of the trends we've seen more broadly.
JSR: Mm-hmm (affirmative).
Ali Wyne: And JSR, let me come to you, just in responding to David. So in your own work at Citizen Lab, what kinds of trends are you observing in terms of either targets and/or tactics?
JSR: Well, the scariest trend, and I think we're seeing it more or less wherever we scratch, is zero-click attacks. So it used to be, you could tell people and be Buddhist about it, "Look, detach from attachments. Be mindful of links that can bite." There's a way to do that and in fact, I'm not just pulling that out from nowhere. We worked many years ago with a group of Tibetans who were looking for a campaign of awareness raising to reduce the threat from Chinese threat actors. And so we used this very Buddhist concept of detaching from attachments: stop sending email attachments to each other. Which resulted in a real drop in the efficacy of these Chinese hacking groups as they were trying to find new ways to get people to click on malware. It took a while.
Ali Wyne: Got it.
JSR: But ultimately, per David, we saw adaptation. In general, I think the problem is twofold. One, human behavior is fraught with what we call forever-day vulnerabilities that you can't patch. People are vulnerable to certain kinds of things, certain kinds of deception. And so we need to look at platforms and technologies to do part of that work of protecting people and to try to prevent attacks before they reach the level of a victim having a long, drawn-out conversation with somebody. The other thing, of course, that's really concerning: NSO and many others at this point are selling their customers ways to infect devices, whether it's laptops or phones, that don't require any user interaction. And obviously this is pretty bad because there's nothing you can do about it as a user; you can keep your device updated but you'll still potentially be susceptible to infections. So you can't really tell people, "Look, here are the three things and if you just do them right, you'll be fine."
The second problem set that it creates is that it makes it a lot harder for investigators like us to find traces of infection quickly. It used to be the case a couple years ago even, that when I would run a big investigation to find cases of, say, NSO targeting, the primary process of investigation would involve finding text messages, finding those infection messages. Even if the forensic traces of the infection were long gone, we could find those. But now we have to do forensics, which means that for defenders and researchers and investigators like us, it creates a much bigger lift in order to get to a place where we understand what's going on with an attack. And that to me is really concerning. People in the government side talk about concerns around encryption causing criminals to go dark. My biggest concern is hacking groups going dark because it's a lot harder to spot when the infections happen. Of course, the harm remains and that's really what we're talking about.
Ali Wyne: I suspect that this will be a phrase that will be new to a lot of listeners or fellow listeners such as myself but when you said, "Detachment from attachment," and I said, "It's such a nice turn of phrase," and I didn't actually realize until you related this anecdote, I didn't realize that it was actually grounded in a professional experience that you had.
JSR: Yeah.
Ali Wyne: But I think it's a compelling mantra for all of us, "Detachment from attachment." I do want to be fair and I want to make sure that we're giving listeners a full picture. And so David, let me come back to you. And so one question I imagine some listeners will have, is that in theory, cyber mercenaries could be used for good? Are there some favorable or at a minimum at least, some legitimate ways that cyber mercenaries can, and/or should be employed? I mean, are there places where they're operating legally? Are there places where they're doing good work? So maybe give us a little bit of a perspective on the other side of the ledger?
David Agranovich: So I'll certainly try but I should preface this by saying, most of my career before I joined Meta was in the National Security space.
Ali Wyne: Right.
David Agranovich: And so I take the security threats that I think some of these firms talk about very seriously. The reality is that law enforcement organizations and governments around the world engage in some of this type of surveillance activity. But what's important is that they do that subject to lawful oversight. And with limitations on their legal authorities, at least in democratic systems. What makes this industry so pernicious and so complicated is, at least as far as we can tell, there's no scalable way to discern the purpose or the legitimacy of their targeting. What's more the use of these third-party services obfuscates who each end customer might be, what they are collecting and how the information is being used against potentially vulnerable groups.
There's essentially just a fundamental lack of accountability or oversight in the surveillance-for-hire industry that makes it hard to determine whether any of this targeting could be considered legitimate. If we wanted to develop a whole-of-society approach to the surveillance-for-hire space and answer your question, we would need to, one, create the oversight and accountability that surveillance tools should receive. Two, hold these companies accountable for how their tools are used or misused. And three, align through the democratic process on how much these firms should be allowed to do. Until we answer those questions, the surveillance industry will be ripe for abuse.
JSR: So one of the interesting things I like to think about is people think that the problem with the mercenary spyware industry is that it sells to autocrats and authoritarians. And of course, it's true. That is part of the problem with the industry because you can guarantee that autocrats and authoritarians are probably going to use this technology in bad ways, in ways that are anti-democratic and problematic. But we now have a couple of years' experience looking at what happens when big, sprawling democracies from Mexico to India to Poland get their hands on Pegasus. And what we see is abuses there too.
And so I like to think of the problem set as actually being one where there are very few customers that you could sell this kind of technology to, this really sophisticated surveillance capability, that wouldn't be likely to abuse it. And to me, you have to situate this within the broader problem set, which is that authoritarianism is resurgent around the world. And unfortunately, this technology has come at a time when lots of authoritarians and would-be authoritarians are looking for technological ways to get into the heads and phones of their subjects and people around the world. And it's just a very unfortunate thing that these two things are happening at the same time. But I think we can look around the world and say, the mercenary industry is absolutely increasing the speed of authoritarianism in certain country contexts, including in certain democracies that are sliding towards authoritarianism. Hungary would be an example, El Salvador is another — both big Pegasus scandals, both on paper are democratic, but really moving in a concerning direction.
Ali Wyne: I think that geopolitical context you provided is a really helpful backdrop for, or an overlay on, our broader conversation. Up until now, we've been talking about trends in the digital space, and you're bringing in this geopolitical element; put the two together and there's a real prospect of not only resurgent authoritarianism but resurgent authoritarianism imbued with ever more sophisticated technology. So I think you've given us a sense of that digital-geopolitical nexus and really the scale of the problem. Given the scale of this problem, JSR, as you've outlined it, I want to get you both to react to a snippet of a conversation I recently had with Annalaura Gallo. She's the Head of the Secretariat of the Cybersecurity Tech Accord. And here's what she had to say about cyber mercenaries.
Annalaura Gallo: So the issue here is that we have a private industry that is often legal, that is focused on building very sophisticated, offensive cyber capabilities, because these are sometimes even more sophisticated than states can develop. And then they're sold to governments but also other customers. And essentially they're made to exploit peaceful technology products. We know they've also been used by authoritarian governments for surveillance and to crack down on political opposition in particular. And we think that all this is extremely concerning because, first of all, we are witnessing a growing market. There is a proliferation of these cyber capabilities that could finally end up in the wrong hands. So not only governments but also malicious actors that use these tools to then conduct larger scale cyber attacks. So we don't see how we can just continue in a framework where there is no regulation of these actors, because this would put not only human lives at risk, but also the entire internet ecosystem.
Ali Wyne: So David, let me come to you. If this nexus of issues is so large, who needs to begin to take responsibility, and how? You speak as a representative from a major industry player, Meta. What can the private sector in particular do to mitigate the impact of cyber mercenaries? And maybe if you could just give us a sense of some general industry principles that you'd recommend.
David Agranovich: There's a responsibility, I think, spread across governments, tech companies, civil society and the surveillance industry itself. Governments have the most power to meaningfully constrain the use of these tools. They can hold abusive firms accountable and they can protect the rights of the victims that these firms target. This industry has thrived in a legal gray zone. So the lack of oversight, the lack of regulation has enabled them to grow and appropriate oversight and regulation would go pretty far in curbing some of the worst abuses. Tech companies like ours also need to continue doing what we can to help protect our users from being targeted and to provide people with the tools to strengthen their account security. We need to make it harder for surveillance companies that are part of this industry to find people on our platform and to try and compromise their devices or their accounts.
We routinely investigate these firms. And when we do, we take steps to curb their use of fake accounts, we work to reverse engineer their malware. And then when we do, we share threat indicators, or indicators of compromise, with other industry players and with the public. We're also working to notify the victims when we see them being targeted, which can also help them take steps to mitigate the risk. Because these operations are so often cross-platform — they might leverage applications, social media websites, or websites controlled by the attacker — if we see someone being targeted on one of our platforms, we send them a notification that we think they are being targeted, and in that notification we give them specific steps to follow to lock down their cybersecurity presence. Hopefully that doesn't just protect them from being targeted on our platform; it also might cut off avenues of attack if a surveillance company is trying to get at them another way.
Third, civil society also has an important role to play, in particular, in determining what the norms in this space should be. What's acceptable? What's going too far? And how to start creating those expectations more broadly. And then finally, I mentioned, the surveillance industry has responsibilities here. You can see these firms claim, as JSR has noted, that they're just in the business of targeting terrorists and criminals. That's just not what our investigations find.
JSR: I agree with David. I think you have to have consequences and accountability and we are getting there. One of the most interesting things that happened in this space the last couple years was The Commerce Department choosing to list NSO. Now this, of course, limits the ability of American companies to do business with NSO Group. But it had an immediate and radical signaling effect on investors in NSO and the value of NSO's debt plummeted. I think what's interesting about that is that it shows that the industry and the people who are interested in investing in it kind of know how far offsides they are from basic norms and ethics and risks. And the issue is just that for too long there haven't been consequences.
To put this into a bit of a historical perspective. We've been reporting on the mercenary spyware industry for a decade. Things really started changing only in 2019 when WhatsApp and Meta chose to sue NSO Group. That was the beginning of a different phase. Up until that point, NSO had been like the bully on the playground and civil society groups and people working with victims were like the bullied kids. NSO was just a bigger company, more powerful, pouring millions into PR and lobbying.
Suddenly things got a little more complicated for NSO. And then in the last two years, we've seen not only a string of scandals around NSO coming from a place of investigations and research, but also Apple and others joining legal actions against NSO. And then signals from the US Government, both around NSO specifically and more generally towards the mercenary spyware industry. So I think we have a model for what's needed. It looks like legal consequences and accountability for abuses. It looks like serious leaning in by players like Meta, Apple and others using all the tools available, not just technical control measures. It also looks like making sure that governments do their bit and they protect their own citizens and they also make sure that companies that are really the worst offenders, fueling proliferation, are not able to make a big success at it.
And I think we're still learning how some of these things play out but it's been essential to have big platforms leaning in. I see it a little bit like a stool, you have civil society, you have government, and you have the private sector. And we have two legs now, private sector and civil society and that third leg I think is coming. I'm very excited, for example, that the European Union is on the cusp of opening up a Committee of Inquiry into Pegasus and the mercenary spyware industry, more generally, they have a pretty broad mandate. And I just hope to continue to see more governments taking action.
I think when we see that happen, we're also going to see a real shift in the norms of the debate. Because the problem here is not just the tech, it's really the proliferation of that tech. And you solve that problem in the same way that you would solve the proliferation of other kinds of technology that can be used for war and instability. One bug I want to put in the ear of your listeners is this. So we talk about this stuff, as we're talking about the harms that come directly from an attack. So, the harms to an individual or the person that they're in contact with when they get hacked or even to the chilling effect on democracy and civil society somewhere, if all the journalists are being bugged by a greedy autocrat.
But the problem space is actually much larger, as I think some of this conversation has pointed out. If the US Government cannot ensure that its cyber weapons stay out of the hands of criminal groups, what's the likelihood that mercenary spyware players, selling to governments that absolutely cannot get their act together — like Togo, for example — are going to prevent these very sophisticated zero-day vulnerabilities and other flaws from being used in a much more vigorous way by cyber criminal groups and others that may get their hands on them? To me, that's one of the biggest concerns because we've been playing with fire on this problem since the beginning and, mark my words, it's only a matter of time before we see something really serious and bad happening here.
Ali Wyne: You mentioned that three-legged stool and you mentioned that we have two legs of that stool but we need to work on the third one. Obviously a lot of work to do but really grateful that the two of you are involved in that work. John Scott-Railton, Senior Researcher at the Citizen Lab at the University of Toronto's Munk School. David Agranovich, Director of Global Threat Disruption at Meta. Thanks so much for this really terrific conversation.
JSR: Thank you so much.
David Agranovich: Thank you, Ali.
Ali Wyne: That's it for this episode of Patching the System. Next time we'll wrap up this series with a look at the Cybercrime Treaty negotiations underway at the United Nations, and what they could mean for cyberspace globally. You can catch this podcast as a special drop in Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne, thanks very much for listening.
Podcast: Lessons of the SolarWinds attack
Listen: Two years after the discovery of one of the largest cyber attacks in history, we’re looking at the current state of security for both software and hardware supply chains.
In early 2020, a group of hackers broke into a software system built and managed by the Texas-based company SolarWinds. The malware they installed was eventually downloaded by thousands of SolarWinds customers, including both private companies and government agencies like the US State Department. SolarWinds has since said the number of clients actually hacked was far lower.
What lessons were learned, and how vulnerable are information and communication technology supply chains today?
In the third episode of Patching the System, a GZERO podcast produced as part of the Global Stage partnership with Microsoft, we’re examining that question with two top experts in the field.
Our participants are:
- Gaus Rajnovic, cybersecurity manager at Panasonic Europe
- Charles Carmakal, senior vice president and chief technology officer at Mandiant
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Gaus Rajnovic: Scope and coverage, that is why those supply chain attacks are so dangerous. They spread easily and relatively quickly.
Charles Carmakal: As an average person though, it is impossible for us to really consider all the variety of cybersecurity attacks that are out there. And in general, the right practice is to have some level of trust of the vendors of the software that you use.
Ali Wyne: Welcome to Patching The System, a special podcast for the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
Today we're talking about a cyber attack so massive it became a household name: SolarWinds. In early 2020, a group of hackers broke into a software system called Orion, which was built and managed by the Texas-based company SolarWinds. They installed malicious code, and later that spring it was unwittingly delivered to customers in routine software updates. In total, more than 18,000 clients were affected, including large private companies as well as some government agencies, including the State Department and the Department of Homeland Security.
Now the SolarWinds hack is an example of what we call a supply chain attack on information and communication technology, or ICT for short. We're going to talk about what those kinds of attacks are and why they pose a serious and unique threat in the world of cyber attacks.
Joining us now are two industry representatives who work on different sides of this issue. First, Charles Carmakal, who is a senior vice president and chief technology officer at Mandiant, a security research firm working to discover and thwart bad actors who target technology products and services. Charles, it's great to have you here.
Charles Carmakal: Thanks a lot. It's great to be here.
Ali Wyne: We're also joined by Gaus Rajnovic. He serves as a cybersecurity manager at Panasonic Europe, which makes a wide range of technology products, many of which listeners may be familiar with. Gaus, welcome.
Gaus Rajnovic: Hello.
Ali Wyne: Let's dive in first just by providing some additional context around the hacking attack that I mentioned, SolarWinds. Charles, you are from Mandiant, which obviously has a relationship to this attack, so why don't you explain to us from your perspective: first, how bad was this attack? Second, how was it discovered? And third, what was the fallout?
Charles Carmakal: Yeah, absolutely. So I was very close to the event. It really changed my world and I remember it very vividly. So back in December, 2020, I worked for FireEye and that was the organization that ended up detecting and discovering the SolarWinds attack. An employee had registered a second mobile device into our multifactor authentication solution and one of our security analysts had picked that up and noticed that the second enrollment of the secondary device seemed a little bit suspicious. And so he actually reached out to the employee to figure out whether or not he had actually enrolled a second mobile device and he said he didn't.
And that actually kicked off what ended up being probably one of the most notable cybersecurity events in history. And so as the days progressed and as we conducted an investigation to really try to understand how did somebody get access to the credentials of this employee to be able to enroll a second mobile device into our multifactor authentication solution, we started finding evidence of attacker activity.
As part of any investigation, you've got to figure out how did the attackers actually get access to the environment. And as we were digging and digging and digging, the earliest evidence of attacker activity that we saw occurred on the SolarWinds Orion systems that we use at FireEye, but we couldn't tell exactly how did the threat actor get access to those systems. And there were a number of hypotheses that we tested, and one of those tests was whether or not there was a potential supply chain attack, essentially meaning was it possible that malicious code was actually sent to our computers through SolarWinds, through a legitimate code process? And what we ended up doing to figure out if this actually was the case was we reverse engineered the SolarWinds Orion software. We essentially reverse engineered tens of thousands of lines of code.
And ultimately, we identified a few thousand lines of code that were heavily obfuscated and looked very suspicious, if not malicious in nature. And the thing that we noticed about it was it was digitally signed by the vendor. So we knew that the code, although it looked suspicious, arguably malicious, we knew that it came from SolarWinds because it was digitally signed unless the digital signatures were stolen or the keys were stolen from SolarWinds. And so as part of the process, we called SolarWinds and we let them know what our findings were. And we got back on the phone with them a few hours later and what they had confirmed to us was that they didn't see any malicious code in the code repositories for the SolarWinds' Orion product.
But what they told us is after the product was built, they see these unauthorized routines that are added to the finished product. And at that point in time, we knew that there was a supply chain attack, and we knew that the legitimate software that was downloaded by thousands of organizations ended up having a malicious component, what we call SUNBURST, which would've allowed a Russian adversary to get access to computer systems that were running the software.
The important thing to note is, this was arguably one of the most expensive cyber weapons that had been developed by a government. And because it was so expensive, the threat actor behind it was very specific in choosing which organizations they were going to leverage this backdoor and this capability to conduct further intrusion activity on. And we suspect that there were less than a hundred organizations that they chose to leverage this capability with.
But those a hundred, some odd organizations are arguably some of the most high profile, hardest to hack organizations out there from both a commercial and a government perspective. The impact was pretty significant in the sense that this enabled the Russian government to get access to data that was of strategic interest to the Russian government. They were very interested in data that governments had, and they were interested in commercial entities that either had access to that same data, or that could facilitate access to the systems or the network of government entities to get that information that was of strategic interest to the Russian government.
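For readers curious what "digitally signed by the vendor" buys you in practice: before trusting an update, software can check that the bytes it received hash to a value the vendor published (real code signing goes further and verifies an asymmetric signature over that hash). Here is a minimal sketch of the integrity half of that check; the function names are illustrative, not from any real updater:

```python
import hashlib
import hmac

def sha256_of(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_untampered(path, published_digest):
    """Compare against the vendor-published digest in constant time."""
    return hmac.compare_digest(sha256_of(path), published_digest)
```

Note that in the SolarWinds case the malicious code was inserted before the vendor signed the build, so the tainted update passed exactly this kind of verification, which is why supply chain attacks are so insidious.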
Ali Wyne: Wow. I mean, almost felt like I was listening to a new Netflix drama on the SolarWinds attack. Now granted this particular hack took place in 2020, but the narrative that you just related to us, it obviously couldn't be more timely in light of Russia's invasion of Ukraine. Gaus, I want to come to you. Can you give us a basic understanding of what we mean by the phrase supply chain attacks? I mean, why are they dangerous?
Gaus Rajnovic: Absolutely. Let me just say, I am working for a vendor. Part of my answer is basically describing how those attacks affect us vendors, the people who are producing something, but at the same time, how they also affect users of technology.
So coming to your question, scope and coverage, that is what makes them dangerous. They spread easily and relatively quickly. I was also always puzzled why we haven't seen them a little bit earlier because from my vendor side, you were able to see that potential some time ago.
Gaus Rajnovic: And why am I saying that? Let me give an example of how products are made. So when you want to create a new product, you would take ready-made components, you just put everything in one pile, and then you add a little bit of magic, something that is unique to your product, so that you can be different from the others. But basically, apart from a relatively small number of vendors, most vendors are using already-made components. So there is widespread reuse, and that is why these attacks spread through our industry, and that is why those supply chain attacks are so dangerous.
Ali Wyne: Charles, I want to come back to you in light of the explanation that Gaus just provided and talking about not only sort of software but hardware. When we hear the phrase supply chains, we often think about physical components. So imagine, for example, car parts that are made in one country that are put in cars in another country, but in the case of SolarWinds, we're talking about the targeting of software, not hardware. It's less about a bunch of items that are stored in a warehouse somewhere. Can you explain a little bit more about what is in a software supply chain? And maybe in addition to explaining what is in a software supply chain, help us understand the various points along that chain that potentially could be targeted?
Charles Carmakal: Yeah, yeah. So look, I think there is a general confusion as to what a supply chain attack is and if you asked somebody what a supply chain attack is back in the summer of 2020, they'd give you a very different definition than what they would've told you in January of 2021. And many of our definitions changed in December, 2020 when we heard about the SolarWinds attack.
Ali Wyne: Sure.
Charles Carmakal: And that's kind of because of the ubiquity of SolarWinds and because of how prevalent the attack was. So what happened with SolarWinds is a threat actor found a way to insert malicious code into a legitimate product that ended up getting shipped out to a variety of customers across the globe.
And that software was authenticated by SolarWinds, and it was part of a legitimate software update process that SolarWinds had established to allow people to get updated versions of their software. And when they say 18,000 some odd organizations could have potentially been impacted by SUNBURST or by this attack, that's really the estimated count of the number of organizations that had legitimately downloaded the software update.
Another way to look at a supply chain attack is, can a threat actor break into one company, perhaps a service provider, and leverage that access to that one service provider and get access to dozens or hundreds or thousands of other organizations because of the legitimate connectivity between that service provider and their thousands, some odd customers?
And we see attacks like this all the time.
Ali Wyne: Gaus, Panasonic is a company that obviously makes a lot of consumer devices. So are there similar cyber vulnerabilities in the hardware supply chain? How can hardware supply chains be targeted?
Gaus Rajnovic: Oh, well, yes. I mean, nowadays, especially in IT, hardware is just software in disguise, because usually, if possible, you would like to have a chip that you can use for multiple purposes. So basically you will have hardware that you then program, and then you change the function as needed. So yes, absolutely. But if we move a little bit away from that and look generally at hardware, and chips in particular, vulnerabilities in them are a longstanding concern, especially for Western governments. What is happening is that obviously they need to procure products for their environment, and sometimes that environment tends to rely on relatively old chips, which are not being produced by, let's say, Western vendors anymore. So they need to procure them from elsewhere. Now you have a question: do we really trust that those chips do not contain any undocumented and unwanted functionality?
So you do have a whole science of testing and verifying that what you are getting is actually what you want, and that there is nothing else in it. And it is hard.
Ali Wyne: So Charles, let me come back to you. So software supply chains are being targeted, hardware supply chains are being targeted. Well, obviously as listeners, I think all of us, regardless of what our position is or what kinds of devices we have, we all want to take steps to enhance the security of our devices, but I've got to be candid with you. When I think about how many times, just in a given day, how many times my phone asks me to install certain updates, my laptop asks me to install certain updates, my instinct is to install them because I want to enhance the security of my devices, but it's pretty overwhelming to think about whether or not we can actually trust all of these updates that we were being asked to install. Isn't it?
Charles Carmakal: It absolutely is. And a funny argument that was being made during the SolarWinds supply chain days was that the organizations that weren't impacted were the ones that were really late to apply the patch. And so some of them joke that they just had such bad patch management processes that they didn't have to deal with the cybersecurity implications of the attack.
Ali Wyne: Yeah.
Charles Carmakal: As an average person though, it is impossible for us to really consider all the variety of cybersecurity attacks that are out there. And in general, the right practice is to have some level of trust of the vendors of the software that you use, assuming that you're using legitimate commercial software or open source software, and it's to apply the patches that are made available because the vast majority of the time, the patches will actually increase the overall security posture of your system, of your device, of whatever it is that you're installing the patch on.
It's the edge cases where attackers get access to software supply chains and are able to modify legitimate code and insert malicious code there. It happens. I could list off a few dozen examples where it's happened in the past, but for all the times that it's happened in the past, I mean, that's representative of less than 0.001% of all the software updates and the patching that goes on out there. So I think in general, as an average, everyday person, patching is good, patching provides additional security capability and benefits. And when there are situations where patches are actually malicious, usually the community figures that out and they share that information broadly and people take corrective action.
Ali Wyne: No, that's helpful. That's helpful to know. Gaus, I want to come back to you and I want to make a little bit of an analogy between some of the supply chain vulnerabilities that we've been discussing and some of the vulnerabilities that we've seen in the time of COVID and I want to ask you if you can reflect a little bit on that comparison. So specifically, we know that supply chain issues earlier on in the product life cycle, say in raw materials, for example, that they can have a much greater impact later on when products get closer to consumers.
And we saw something kind of similar to this in global supply chain issues we faced, and that we continue to face during the coronavirus pandemic. So for example, if the raw materials were delayed a week due to labor shortages, that might mean that the next step of production was delayed two weeks or the next step four weeks, and so on. Do you think that it's valid to apply kind of a similar model to ICT supply chain attacks? And that is to say, the earlier in the process we see a vulnerability, the worse and broader the negative result and impact might be later on?
Gaus Rajnovic: Unfortunately, the answer is yes. There is also an additional, how to say, complication to that, because usually the earlier in the chain you go, those really basic components tend not to be so sexy, so people don't really pay that much attention to them. They'll just say, "Yeah, yeah, fine. I mean, that was working for the last 20, 30 years. Why should I look at it? Just take it and run." And here is one great example of that. There is something called Abstract Syntax Notation One, or ASN.1 for short. It is something that is used everywhere. Not many people have heard of it, but it is a way to format data before you send it over the network to another peer. Approximately 20-odd years ago, there was a vulnerability in an ASN.1 library, in the way that data format was unpacked and processed. It was a very bad vulnerability.
So you were able to do remote code execution on many, many devices by just sending a packet to a device, and the vulnerability would be triggered the moment the device started processing the packet, way before any other logic kicked in. So it was really basic stuff. If you go and look for those old advisories, you will still find them; just look at the number of vendors that were affected by that vulnerability, it is staggering. And also look at which protocols were implicated, meaning they were using, and still use, ASN.1 notation. It is absolutely unbelievable. But the worst thing is that we are now almost 20 years after that incident, and I am not sure that all vendors have patched it, because there are still plenty of old vulnerable libraries lying around, and some small vendors would just take one and run with it, and nobody would look at it because, "Hey, it was working for the last 20, 30 years. Why bother? It works." So unfortunately, yes, the earlier you get into the supply chain, the more damage you can make.
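The class of bug Gaus describes, a parser trusting an attacker-supplied length field, can be illustrated with a toy tag-length-value (TLV) decoder. This is a simplified sketch, not the actual ASN.1 flaw; in a memory-unsafe language, omitting the bounds checks below would let a crafted packet read or write past the buffer, which is how such bugs become remote code execution:

```python
def parse_tlv(buf):
    """Decode a sequence of tag (1 byte), length (1 byte), value records.

    The bounds checks are the point: a naive parser that trusts the
    declared length is the kind of bug described above.
    """
    records = []
    i = 0
    while i < len(buf):
        if i + 2 > len(buf):
            raise ValueError("truncated TLV header")
        tag, length = buf[i], buf[i + 1]
        if i + 2 + length > len(buf):
            # Malicious packet: declared length runs past the buffer.
            raise ValueError("declared length exceeds packet size")
        records.append((tag, bytes(buf[i + 2:i + 2 + length])))
        i += 2 + length
    return records
```

A packet such as `b"\x30\x02AB"` parses cleanly, while `b"\x30\xffAB"` declares 255 bytes of value but carries only two, and is rejected up front instead of being processed.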
Ali Wyne: That's sobering and it's sobering to think that even 20 years on, 30 years on, that we still have some of these initial vulnerabilities that companies haven't adequately addressed. Charles, you talked about SolarWinds. It affected 18,000, right? I should say it targeted 18,000 organizations, but as you mentioned, the attacks themselves were only actually executed against a handful of government organizations. So let's say that I'm a business owner who didn't have my systems disrupted, can you help the listeners understand what might be going on? Why it might be the case that I, as a business owner, didn't have my systems disrupted by the SolarWinds attack?
Charles Carmakal: Yeah. And just to clarify, so the threat actor leveraged the SolarWinds backdoor to get access to both government and commercial entities.
Ali Wyne: Right.
Charles Carmakal: So there were a number of technology companies and a variety of other companies that weren't government entities that were targeted. Those companies, as you think about the victimology, I mean, a lot of those companies were pretty large organizations with very large customer bases that potentially could have been the next SolarWinds. And in fact, we may not be calling it the SolarWinds attack if it hadn't been detected when it was detected. What I mean by that is the threat actor, I believe, was surprised that they got caught when they did. I think they were surprised when we outed them. I think they probably thought that they had months or years to continue to do what they were doing. They were doing things in such a clandestine and quiet manner, and they were doing it relatively slowly because they didn't want to get caught. They wanted to keep stealing information, again, that was of strategic interest to the Russian government.
Ali Wyne: Sure.
Charles Carmakal: And I think that they were interested in continuing to create other supply chain attacks. When you look at some of the companies that they broke into, I mean, they broke into security companies, they broke into technology companies. I do think that in a way we're all lucky that it was detected when it was, because I think we very likely stopped what could have been the SolarWinds attack, plus the technology company X's attack, plus the technology company Y's attack that a lot of people would've probably talked about.
For companies outside of the big companies that were targeted, so you just think about just any other company out there, look, most organizations aren't of interest to the Russian government. When Russia conducts offensive operations, they do it for a reason. Sometimes they do it because there is political reason. Sometimes they do it for national defense purposes. Sometimes they do it because they're embarrassed by something. And so you think back to 2016, and no, I'm not going to give the example about the US presidential elections, because everybody talks about that, but I'm going to give a different example. I'm going to talk about the attacks against the Anti-Doping agencies. And there are a number of them that were hacked. And essentially what happened was there were Russian athletes that were accused of doping and it was made known that the Russian government was aware of the performance enhancement drugs that Russian athletes were using.
And so the Russian government wanted to prove that the rest of the world uses performance enhancement drugs and they dope and they hacked into a number of the anti-doping agencies and they ended up publishing a lot of information related to athletes that had failed tests before certain sporting events. And so that is an example of one of the reasons why the Russian government may conduct intrusion operations. There's usually a specific reason. Today as we think about the invasion of Ukraine, I think it's interesting that we haven't seen destructive attacks against the Western world yet. Now leading up to the Ukrainian invasion, we definitely saw a number of intrusions against ministries of foreign affairs and a variety of countries by Russian threat actors to steal information, again, of strategic interests to the government.
We definitely saw very destructive and disruptive attacks against organizations in Ukraine, leading up to the invasion and then coinciding with the invasion. But we haven't seen the attacks against other Western organizations. And I think the anticipation and the fear is that the Russian government tends to go tit for tat. They will very likely target the sectors that were most impacted in Russia, but in other parts of the world. So when you think about the sanctions against Russian entities, there's a very good chance or least the belief is that the Russian government will conduct some kind of cyber operation against the US financial services sector. There's fear and anticipation that the Russian government will target energy sector organizations. There's also fear that Russia will look at who are the companies that are publicly aligning with Ukraine and very vocally standing up against Russia and those are very likely going to be targets of Russian espionage and destructive operators.
Ali Wyne: This actually goes to the name of this podcast series, Patching the System. Gaus, I want to come back to you. Obviously, prevention is probably better than patching things up after an attack. Supply chains are getting more and more complex, more and more complicated. The vast majority of everything comes from somewhere else. I mean, what can companies realistically do to protect themselves?
Gaus Rajnovic: Well, trust, but verify. I would like to split this answer in two. One part is from the perspective of an organization as a consumer of technology. Obviously you cannot go and, I don't know, disassemble and analyze each and every line of a patch that you receive, and do that constantly for each and every device. It is just out of the question. But what you can do is monitor what is going on in your environment. That should be part of your normal security operations center or CERT, or some other security function that you have, or should have, internally. So you can trust that what you receive is legitimate and that everything should be fine, but you still need to go and verify that the things you are seeing inside your system match what you would expect to see inside the system.
And Charles said that at the beginning: they made the discovery just because they spotted an anomaly. The other part of my answer is when I look at this question through my vendor's eyes: what can I do as a vendor to make sure that what I'm receiving is legitimate and what I'm expecting, so that what I'm passing further up the supply chain also does not contain any defects or vulnerabilities or anything else? And again, the same principle, trust but verify, applies to all my suppliers. You can start with some really basic things. For example, I would like to have a contact at each of my suppliers who is handling product security vulnerabilities, meaning that if I find something in that component, I know who to call. Another very basic thing: a software bill of materials, or a bill of materials in general. I would like to know what is inside the component that I am receiving.
I would also like to know what tests my supplier has done on that component and what the results are. And then I would repeat the testing myself just to make sure that everything matches. So those are some very basic things, but they help a lot. And I am sometimes ashamed to say that in the industry today, even with all this advancement, we still have vendors, and I'm not talking about really small vendors, I'm talking about midsize vendors, where there is nobody to call when we have a vulnerability in their product and we need to fix it.
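Gaus's bill-of-materials point lends itself to a small illustration. The component names, versions, and fields below are entirely hypothetical, and real SBOMs use standard formats such as SPDX or CycloneDX, but the core use case, matching what you ship against known-vulnerable components, is essentially a lookup:

```python
# Hypothetical known-vulnerable (name, version) pairs, e.g. from an advisory feed.
KNOWN_VULNERABLE = {("asn1-codec", "1.0.2"), ("old-tls", "0.9.8")}

def flag_vulnerable(sbom):
    """Return the SBOM entries that match a known-vulnerable component."""
    return [c for c in sbom if (c["name"], c["version"]) in KNOWN_VULNERABLE]

# A toy SBOM for one product: every third-party component it contains.
product_sbom = [
    {"name": "asn1-codec", "version": "1.0.2", "supplier": "example-co"},
    {"name": "fast-json", "version": "3.4.1", "supplier": "other-co"},
]
```

Without the inventory you cannot even ask the question, which is why the bill of materials from every supplier matters so much.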
Ali Wyne: Charles, you heard what Gaus just said. I mean, what are some of the ways in which software and hardware manufacturers can come together to help build trust and to help verify? I mean, how should they coordinate their efforts, how should they keep their lines of communication open, especially because hardware increasingly needs constant updates to software?
Charles Carmakal: Yeah, absolutely. I mean, I definitely agree with Gaus on the trust but verify. And I think there's a certain amount of trust that organizations have to place in their vendors that they're doing the right things, and I think most people want to do the right things. It sometimes becomes cost prohibitive to do so, so it doesn't always happen. I think having more transparency about the security issues that people identify, and doing timely fixes, is really important. And I'll tell you, it's definitely frustrating to security researchers that identify security vulnerabilities in particular products: they notify the vendors and they never get an acknowledgement that the vendor heard that there's a vulnerability. Or sometimes the vendor will say, "There isn't a vulnerability." Sometimes they'll say, "There is a vulnerability, but we'll get around to fixing it whenever we get around to fixing it."
And so responses like that can be really demoralizing for security researchers. Being as transparent as you can, trying to be timely with fixes, sharing information, and collaborating as a community are really important to addressing this problem.
Look, I'll tell you, things have dramatically changed for the better in terms of the collaboration of vendors and the security community. 20 years ago, we all used to laugh at Microsoft and say that they have arguably a terrible cybersecurity program. But when I look at probably one of the best case studies of an organization that's dramatically changed people's perceptions of them and just how much time and effort and care they put into security, I mean, Microsoft's one of the best examples.
Ali Wyne: So as both of you know, in an earlier podcast episode, I spoke to Annalaura Gallo. She's the head of the secretariat of the Cybersecurity Tech Accord. Here's what she had to say about new approaches to combating ICT threats.
Annalaura Gallo: So we've been engaging with the UN in the context of the dialogues on responsible state behavior in cyberspace, and we will continue to do so. And we have been encouraging states in particular to introduce a new norm that clearly declares cyber attacks against the ICT supply chain out of bounds. But our signatories have also been calling for a shift in the way governments and businesses defend against this type of attack, because taking a purely defensive approach is no longer enough. Organizations should start to think like the attackers.
Ali Wyne: So Gaus, why don't we start with you? What is your reaction to Annalaura's assessment?
Gaus Rajnovic: I agree with the assessment. Yes, we need to change things, because obviously the way they are at the moment is not an ideal situation, and we would like to improve it. Having said that, there are multiple points. The first point, for example, is somehow putting the supply chain out of bounds, so that you should not attack the supply chain. It is an admirable goal. Personally, I am skeptical that it will ever be achieved, just because the supply chain is so big. I mean, when you look at a single product, how many components are coming in, and from where, and through how many hands that product is passing, it is really hard to say, "Okay, we will just put everything that relates to the supply chain out of bounds. We will not attack it. We will not mess with it. But we will go after everything else."
And please bear in mind that there are also initiatives, for example, to put some national infrastructure, which is necessary for civilians to live, out of bounds as well, and things like that. I'm afraid that in the flux of war, it doesn't work that way. So admirable, but I'm skeptical about it. If I may just quickly?
Ali Wyne: Sure.
Gaus Rajnovic: There is another point about not being only defensive. So yes, I have heard those ideas multiple times, suggesting that we should also go on the offensive in a sense, or at least have an active defense. Again, personally, I'm not in favor of active defense, because in my mind, imagine two gunslingers in the old Wild West. They would just come into the street. Everybody would move aside. Then they would have a shootout, and then pack their things and go away, to continue this duel at some other time.
But what will happen is that after that duel, there will be lots of bullets in the walls and broken windows, and somebody will have to go and fix that. Just apply this metaphor to cyberspace. Yes, you will have bad guys and good guys, and they will go at each other, but throughout that battle, there will be lots of broken stuff, which somebody will have to fix later. And unfortunately, some of that stuff that gets broken during the conflict, there are people whose lives depend on it. And that is the reason why I'm not overly fond of active defense.
Charles Carmakal: Look, I think there are general rules of engagement that certain countries operate by. So for example, when you look at the Chinese government, they typically conduct operations for political or military purposes. They used to also conduct offensive operations for economic purposes, but there was an agreement between President Obama and President Xi where neither acknowledged hacking into companies in each other's countries, but they said that in the future, they wouldn't do it for economic espionage. So I think rules of engagement are really important. That helps figure out what the line is, when it gets crossed, or what events would be considered crossing the line and a potential escalation. And that's something that we're all thinking about right now. If what happened in Ukraine a few years ago with NotPetya happened in the United States, I'd be afraid to see what the United States might do in retaliation.
And so we're all trying to figure out and I think Russia and many governments are very mindful of at what point in time does a cyber operation cross the line and force the victim, or maybe a collection of countries to retaliate in a very escalatory way. And then at what point in time does the retaliation include kinetic consequences? So for example, when we look at the Colonial Pipeline incident from last year, from my perspective, our assessment is that the Colonial Pipeline intrusion was done for financially motivated purposes by organized criminals. We don't believe that Putin had directed that, or at least we have no evidence that Putin had knowledge or directed that. But it'd be a very different thing if there was evidence of Putin directly asking for the intrusion of Colonial Pipeline that would have an impact on the supply chain of gas getting to airplanes and vehicles.
And so, again, I think we're all thinking about the escalation. So it is good to have rules of engagement, but the downside of having rules of engagement is that anything you don't expressly prohibit may look permitted. Is everything else okay to do? So if you say-
Ali Wyne: Right.
Charles Carmakal: "Protect the supply chain," or, "You can't attack healthcare," is it then fair game to hack into financial services companies, into manufacturing companies, and schools, right? So that's the counterpoint. So it's hard to tell, but I think the rules of engagement, generally speaking, are good.
Ali Wyne: I would guess that most of the folks who are tuning in to today's podcast, don't have your expertise either on the hardware side or on the software side. Most of the listeners, I would guess, are just lay consumers. And so for us lay consumers, what can we do? What can we do to enhance the security of our devices, to mitigate some of the issues that we've talked about today?
Charles Carmakal: Yep. There are three things, and these are three very important things that are, unfortunately, somewhat hard to do. The first thing is we strongly encourage everybody to use a password manager and have a unique, different password for every single website that you use. And the reason for that is because when a threat actor hacks into a particular website or a company, one thing that they do is download all the usernames and passwords for all the users of that resource. And they attempt to use those usernames and passwords on other websites. It could be email accounts or bank accounts or social media accounts, but that is a very prominent way threat actors break into organizations and accounts and steal data from people. Number two, use multifactor authentication. So that's basically where you provide a username, a password, and then some secondary form of authentication.
Sometimes it's a code that gets texted to you over your phone, sometimes it's an email that gets sent to you with a code, or maybe there's an app on your phone that gives you a randomly generated number. That helps mitigate the risk of somebody getting access to your account. And the third thing is to apply software patches when your device tells you to, but try to be cautious and careful. You don't want to get tricked into clicking a link on a website that's telling you to apply a patch that's actually malicious. You want to get familiar with where a computer or a device would legitimately tell you to apply an update, do it when you're asked, and be mindful that there are scammers and threat actors who will try to trick people into applying patches that aren't real and are actually malicious.
Gaus Rajnovic: I totally agree with what Charles said, and those are the things that the consumer can do. I mean, looking at it from the perspective of somebody who is making those products, they need to be as easy to use as possible. And unfortunately, that also limits what actions users can take. At the other end, if I have a product that everybody needs to tweak for, I don't know, two hours before they use it, I will not sell that product at all, because people want functionality and people want stuff to work as is. So password management: absolutely a must, do it. There is one thing that, at least on the producing side, we see coming, something called security labels. There are several governments around the world who are toying with that idea. So basically, you have a label on a product that will tell you, more or less, how secure the product is. It is not finalized yet.
There are some pilot schemes going on in Finland and Singapore. Most likely it is something that will come to the market in the next, I don't know, three to five years or thereabouts. So that will also be something that, later on, consumers could look for, to find out whether one product is more secure than an alternative and then base their purchasing decision on that label. But we are not there yet at the moment.
Ali Wyne: Charles Carmakal, senior vice president and chief technology officer at Mandiant, Gaus Rajnovic, cybersecurity manager at Panasonic Europe, thank you both so much for being here.
Charles Carmakal: Absolutely.
Gaus Rajnovic: The pleasure was mine.
Ali Wyne: Well, that's it for this episode of Patching the System. You can tune in next time for more on the future of cyber threats and what we can do about them. You can catch this particular podcast as a special drop in Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne. Thanks very much for listening.