Biden will support a UN cybercrime treaty
The Biden administration is planning to support a controversial United Nations treaty on cybercrime, which would be the first legally binding global agreement on cybercrime.
The treaty would be an international agreement to crack down on child sexual abuse material, or CSAM, and so-called revenge porn. It would also increase information-sharing between parties to the treaty, increasing the flow of evidence the United States, for one, has on cross-border cybercrime. It would also make it easier to extradite criminals.
But the treaty has faced severe pushback from advocacy groups and even Democratic lawmakers. On Oct. 29, six Democratic US senators, including Tim Kaine and Ed Markey, wrote a letter to the Biden administration saying they fear the treaty, called the UN Convention Against Cybercrime, could “legitimize efforts by authoritarian countries like Russia and China to censor and surveil internet users, furthering repression and human rights abuses around the world.” They said the treaty is a threat to “privacy, security, freedom of expression, and artificial intelligence safety.”
The senators wrote that the Convention doesn’t include a needed “good-faith exception for security research” or a “requirement for malicious or fraudulent intent for unauthorized access crimes.” This runs afoul of the Biden administration’s executive order on AI, which requires “red-teaming” efforts that could involve hacking or simulating attacks to troubleshoot problems with AI systems. The UN will vote on the Convention later this week, but even if the United States supports it, it would need a two-thirds majority in the US Senate — a difficult mark to achieve — to ratify it.

Podcast: Can governments protect us from dangerous software bugs?
Listen: We've probably all felt the slight annoyance at prompts we receive to update our devices. But these updates deliver vital patches to our software, protecting us from bad actors. Governments around the world are increasingly interested in monitoring when dangerous bugs are discovered as a means to protect citizens. But would such regulation have the intended effect?
In season 2, episode 5 of Patching the System, we focus on the international system of bringing peace and security online. In this episode, we look at how software vulnerabilities are discovered and reported, what government regulators can and can't do, and the strength of a coordinated disclosure process, among other solutions.
Our participants are:
- Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro
- Serge Droz from the Forum of Incident Response and Security Teams (FIRST)
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Can governments protect us from dangerous software bugs?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
DUSTIN CHILDS: The industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
SERGE DROZ: I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable.
ALI WYNE: If you've ever gotten a notification pop up on your phone or computer saying that an update is urgently needed, you've probably felt that twinge of inconvenience at having to wait for a download or restart your device. But what you might not always think about is that these software updates can also deliver patches to your system, a process that is in fact where this podcast series gets its name.
Today, we'll talk about vulnerabilities that we all face in a world of increasing interconnectedness.
Welcome to Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
And about those vulnerabilities that I mentioned before, we're talking specifically about the vulnerabilities in the wide range of IT products that we use, which can be entry points for malicious actors. And governments around the world are increasingly interested in knowing about these software vulnerabilities when they're discovered.
Since 2021, for example, China has required that anytime such software vulnerabilities are discovered, they first be reported to a government ministry even before the company that makes the technology is alerted to the issue. In the European Union, less stringent but similar legislation is pending that would require companies that discover that a software vulnerability has been exploited to report the information to government agencies within 24 hours and also provide information on any mitigations used to correct the issue.
These policy trends have raised concerns from technology companies and incident responders that such policies could actually undermine security.
Joining us today to delve into these trends and explain why are Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan, and Serge Droz from the Forum of Incident Response and Security Teams, aka FIRST, a community of IT security teams that respond when there's a major cyber crisis. Dustin, Serge, welcome to you both.
DUSTIN CHILDS: Hello. Thanks for having me.
SERGE DROZ: Hi. Thanks for having me.
ALI WYNE: It's great to be talking with both of you today. Dustin, let me kick off the conversation with you. And I tried in my introductory remarks to give listeners a quick glimpse as to what it is that we're talking about here, but give us some more detail. What exactly do we mean by vulnerabilities in this context and where did they originate?
DUSTIN CHILDS: Well, vulnerability, really when you break it down, it's a flaw in software that could allow a threat actor to potentially compromise a target, and that's a fancy way of saying it's a bug. They originate in humans because humans are imperfect and they make imperfect code, so there's no software in the world that is completely bug free, at least none that we've been able to generate so far. So every product, every program given enough time and resources can be compromised because they all have bugs, they all have vulnerabilities in them. Now, vulnerability doesn't necessarily mean that it can be exploited, but a vulnerability is something within a piece of software that potentially can be exploited by a threat actor, a bad guy.
ALI WYNE: And Serge, when we're talking about the stakes here, obviously vulnerabilities can create cracks in the foundation that lead to cybersecurity incidents or attacks. What does it take for a software vulnerability to become weaponized?
SERGE DROZ: Well, that really depends on the particular vulnerability. A couple of years ago, there was a vulnerability that was really super easy to exploit: Log4j. It was something that everybody could do in an afternoon, and that, of course, is a really big risk. If something like that gets public before it's fixed, we really have a big problem. Other vulnerabilities are much harder to exploit, also because software vendors, in particular operating system vendors, have invested a great deal in making it hard to exploit vulnerabilities on their systems. The easy ones are getting rarer, mostly because operating system companies are building countermeasures that make it hard to exploit these. Others are a lot harder and need specialists, and that's why they fetch such a high price. So there is no general answer, but the trend is that it's getting harder, which is a good thing.
ALI WYNE: And Dustin, let me come back to you then. So who might discover these vulnerabilities first and what kinds of phenomena make them more likely to become a major security risk? And give us a sense of the timeline between when a vulnerability is discovered and when a so-called bad actor can actually start exploiting it in a serious way.
DUSTIN CHILDS: The people who are discovering these are across the board. They're everyone from lone researchers just looking at things to nation states, really reverse engineering programs for their own purposes. So a lot of different people are looking at bugs, and it could be you just stumble across it too and it's like, "Oh, hey. Look, it's a bug. I should report this."
So there's a lot of different people who are finding bugs. Not all of them are monetizing their research. Some people just report it. Some people will find a bug and want to get paid in one way or another, and that's what I do, is I help them with that.
But then once it gets reported, depending on what industry you're in, it's usually anywhere from 120 days up to a year until it gets fixed by the vendor. But if a threat actor finds it and it can be weaponized, they can do that within 48 hours. So even if a patch is available and that patch is well-known, the bad guys can take that patch, reverse engineer it, and turn it into an exploit within 48 hours and start spreading. So within 30 days of a patch being made available, widespread exploitation is not uncommon if a bug can be exploited.
ALI WYNE: Wow. So 48 hours, that doesn't give folks much time to respond, but thank you, Dustin, for giving us that number. I think we now have at least some sense of the problem, the scale of the problem, and we'll talk about prevention and solutions in a bit. But first, Serge, I want to come back to you. I want to go into some more detail about the reporting process. What are the best practices in terms of reporting these vulnerabilities that we've been discussing today? I mean, suppose if I were to discover a software vulnerability for example, what should I do?
SERGE DROZ: This is a really good question, and there's still a lot of ongoing debate, even though the principles are actually quite clear. If you find a vulnerability, your first step should be to actually start informing confidentially the vendor, whoever is responsible for the software product.
But that actually sounds easier than it is, because quite often it's maybe hard to talk to a vendor. There are still some companies out there that don't talk to ‘hackers,’ in inverted commas. That's really bad practice. In this case, I recommend that you contact a national agency that you trust that can mediate between you. And that's all fairly easy to do if it's just between you and another party, but then you have a lot of vulnerabilities in products for which no one is really responsible, take open source, or products that are used in all the other products.
So we're talking about supply chain issues, and then things really become messy. And in these cases, I really recommend that people start working together with someone who's experienced in doing coordinated vulnerability disclosure. Quite often what happens is that within the industry, affected organizations get together and form a working group that silently starts mitigating this. Best practice is that you give the vendor three months or more to actually be able to fix a bug, because sometimes it's not that easy. What you really should not be doing is leaking any kind of information. Even saying, "Hey, I have found a vulnerability in product X," may actually trigger someone to start looking at this. So it's really important that this remains a confidential process where very few people are involved.
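The flow Serge describes — report confidentially to the vendor, hold an embargo of three months or more while a fix is prepared, and only then go public — can be sketched as a small state machine. This is purely an editorial illustration: the class, stage names, and 90-day default below are our assumptions, not part of any disclosure standard.

```python
from datetime import date, timedelta

# Illustrative stages of a coordinated disclosure, in order.
# These names and the 90-day default embargo are assumptions for
# illustration, not a formal standard.
STAGES = ["reported", "vendor_confirmed", "patch_ready", "published"]

class Disclosure:
    def __init__(self, reported_on: date, embargo_days: int = 90):
        self.reported_on = reported_on
        self.embargo_ends = reported_on + timedelta(days=embargo_days)
        self.stage = "reported"

    def advance(self) -> str:
        # Move to the next stage. Publication is only allowed once a
        # patch is ready or the embargo window has elapsed.
        i = STAGES.index(self.stage)
        nxt = STAGES[min(i + 1, len(STAGES) - 1)]
        if (nxt == "published" and self.stage != "patch_ready"
                and date.today() < self.embargo_ends):
            raise ValueError("embargo still active and no patch ready")
        self.stage = nxt
        return self.stage

d = Disclosure(date(2024, 1, 15))
d.advance()  # advances to "vendor_confirmed"
```

The point of the sketch is the ordering constraint: nothing reaches "published" without passing through the confidential stages first, which mirrors the need-to-know discipline discussed above.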
ALI WYNE: So one popular method of uncovering these vulnerabilities that we've been discussing, it involves, so-called bug bounty programs. What are bug bounty programs? Are they a good tool for catching and reporting these vulnerabilities, and then moving beyond bug bounty programs, are there other tools that work when it comes to reporting vulnerabilities?
SERGE DROZ: Bug bounty programs are just one of the tools we have in our tool chest to actually find vulnerabilities. The idea behind a bounty program is that you have a lot of researchers that actually poke at code just because they may be interested, and at the company or a producer of software, you offer them a bounty, some money. If they report a vulnerability responsibly, you pay them some money usually depending on how severe or how dangerous the vulnerability is and encourage good behavior this way. I think it's a really great way because it actually creates a lot of diversity. Typically, bug bounty programs attract a lot of different types of researchers. So we have different ways of looking at your code and that often discovers vulnerabilities that no one has ever thought of because no one really had that way of thought, so I think it's a really good thing.
It also rewards people who responsibly disclose and don't just sell it to the highest bidder, because we do have companies out there that buy vulnerabilities that then end up in some strange gray market, exactly what we don't want, so I think that's a really good thing. Bug bounty programs are complementary to what we call penetration testing, where you hire a company that, for money, starts looking at your software. There's no guarantee that they find a bug, but they usually have a systematic way of going over this, and you have an agreement. As I said, I don't think there's a single silver bullet, a single way to do this, but I think this is a great way to actually also reward this. And some of the bug bounty researchers make a lot of money. They actually make a living off that. If you're really good, you can make a decent amount of money.
DUSTIN CHILDS: Yeah, and let me just add on to that as someone who runs a bug bounty program. There are a couple of different types of bug bounty programs too, and the most common one is the vendor specific one. So Microsoft buys Microsoft bugs, Apple buys Apple bugs, Google buys Google bugs. Then there's the ones that are like us. We're vendor-agnostic. We buy Microsoft and Apple and Google and Dell and everything else pretty much in between.
And one of the biggest things that we do as a vendor-agnostic program is an individual researcher might not have a lot of sway when they contact a big vendor like a Microsoft or a Google, but if they come through a program like ours or other vendor-agnostic programs out there, they know that they have the weight of the Zero Day Initiative or that program behind it, so when the vendor receives that report, they know it's already been vetted by a program and it's already been looked at. So it's a little bit like giving them a big brother that they can take to the schoolyard and say, "Show me where the software hurt you," and then we can help step in for that.
ALI WYNE: And Dustin, you've told us what bug bounty programs are. Why would someone want to participate in that program?
DUSTIN CHILDS: Well, researchers have a lot of different motivations, whether it's curiosity or just trying to get stuff fixed, but it turns out money is a very big motivator pretty much across the spectrum. We all have bills to pay, and a bug bounty program is a way to get something fixed and earn potentially a large amount of money depending on the type of bug that you have. The bugs I deal with range anywhere between $150 on the very low end, up to $15 million for the most severe zero click iPhone exploits being purchased by government type of thing, so there's all points in between too. So it's potentially lucrative if you find the right types of bugs, and we do have people who are exclusively bug hunters throughout the year and they make a pretty good living at it.
ALI WYNE: Duly noted. So maybe I'm playing a little bit of a devil's advocate here, but if vulnerabilities, these cyber vulnerabilities, if they usually arise from errors in code or other technology mistakes from companies, aren't they principally a matter of industry responsibility? And wouldn't the best prevention just be to regulate software development more tightly and avoid these mistakes from getting out into the world in the first place?
DUSTIN CHILDS: Oh, you used the R word. Regulation, that's a big word in this industry. So obviously it's less expensive to fix bugs in software before it ships than after it ships. So yes, obviously it's better to fix these bugs before they reach the public. However, that's not really realistic because, like I said, every piece of software has bugs, and you could spend a lifetime testing and testing and testing and never root them all out, and then never ship a product. So the industry right now is definitely looking to ship product. Can they do a better job? I certainly think they can. I spend a lot of money buying bugs, and some of them I'm like, "Ooh, that's a silly bug that should never have shipped." So absolutely, the industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
ALI WYNE: Obviously there isn't any silver bullet when it comes to managing these vulnerabilities, disclosing these vulnerabilities. So assuming that we probably can't eliminate all of them, how should organizations deal with fixing these issues when they're discovered? And is there some kind of coordinated vulnerability disclosure process that organizations should follow?
DUSTIN CHILDS: There is a coordinated disclosure process. I mean, I've been in this industry for 25 years and dealing with vulnerability disclosures since 2008 personally, so this is a well-known process for reporting. As an industry, if you're developing software, one of the most important things you can do is make sure you have a contact. If someone finds a bug in your program, who do they email? With the more established programs like Microsoft and Apple and Google, it's very clear, if you find a bug there, who you're supposed to email and what you're supposed to do with it. One of the problems we have as a bug bounty program is that if we purchase a bug in a lesser-known piece of software, sometimes it's hard for us to hunt down who actually is responsible for maintaining it and updating it.
We've even had to go on to Twitter and LinkedIn to try and hunt down some people to respond to an email to say, "Hey, we've got a bug in your program." So that's one of the biggest things you can do: just be aware that somebody could report a bug to you. And as a consumer of the product, however, you need a patch management program. You can't just rely on automatic updates. You can't just rely on things happening automatically or easily. You need to understand first what is in your environment, so you have to be ruthless in your asset discovery, and I do use the word ruthless there intentionally. You've got to know what is in your enterprise to be able to defend it, and then you've got to have a plan for managing it and patching it. That's a lot easier said than done, especially in a modern enterprise where not only do you have desktops and laptops, you've got IT devices, you've got IoT devices, you've got thermostats, you've got little screens everywhere that need updating, and they all have to be included in that patch management process.
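Dustin's sequence — discover what you run, then check it against known fixes — can be illustrated with a minimal inventory cross-check. A sketch only: the hosts, package names, versions, and advisories below are invented for illustration, and the naive version comparison stands in for the proper version parsing a real patch-management tool would use.

```python
# Hypothetical asset inventory: host -> {package: installed version}.
# Real inventories come from asset-discovery tooling, not hand-typed dicts.
inventory = {
    "web-01": {"nginx": "1.24.0", "openssl": "3.0.8"},
    "lobby-thermostat": {"acme-firmware": "2.1"},
}

# Hypothetical advisories: (package, first fixed version).
advisories = [
    ("openssl", "3.0.9"),
    ("acme-firmware", "2.2"),
]

def needs_patch(installed: str, fixed_in: str) -> bool:
    # Naive dotted-version comparison; real tools use robust version parsing.
    return [int(x) for x in installed.split(".")] < [int(x) for x in fixed_in.split(".")]

def audit(inventory, advisories):
    # Cross-reference every host's packages against the advisory list
    # and collect anything still running a vulnerable version.
    findings = []
    for host, packages in inventory.items():
        for pkg, fixed_in in advisories:
            if pkg in packages and needs_patch(packages[pkg], fixed_in):
                findings.append((host, pkg, packages[pkg], fixed_in))
    return findings

for host, pkg, have, want in audit(inventory, advisories):
    print(f"{host}: {pkg} {have} -> patch to {want}")
```

The thermostat line is the point of the example: the patch backlog only surfaces for devices that made it into the inventory in the first place, which is why the asset-discovery step has to come first.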
ALI WYNE: Serge, when it comes to triaging vulnerabilities, it doesn't sound like there's a large need for government participation. So what are some of the reasons legitimate and maybe less than legitimate why governments might increasingly want to be notified about vulnerabilities even before patches are available? What are their motivations?
SERGE DROZ: So I think there are several different motivations. Governments are getting increasingly fed up with the kinds of excuses that our industry, the software industry, makes about how hard it is to avoid software vulnerabilities, all the reasons and excuses we bring for not doing our jobs. And frankly, as Dustin said, we could be doing better. Governments just want to know so they can actually give out the message that, "Hey, we're watching you and we want to make sure you do your job." Personally, I'm not really convinced this is going to work. So that would be mostly the legitimate reason why governments want to know about vulnerabilities. I think it's fair that the government knows or learns about the vulnerability after the fact, just to get an idea of what the risk is for the entire industry. Personally, I feel only the parties that need to know should know about it during the responsible disclosure.
And then of course, there's governments that like vulnerabilities because they can abuse it themselves. I mean, governments are known to exploit vulnerabilities through their favorite three letter agencies. That's actually quite legitimate for governments to do. It's not illegal for governments to do this type of work, but of course, as a consumer or as an end user, I don't like this, I don't want products that have vulnerabilities that are exploited. And personally from a civil society point of view, there's just too much risk with this being out there. So my advice really is the fewer people, the few organizations know about a vulnerability the better.
DUSTIN CHILDS: What we've been talking about a lot so far is what we call coordinated disclosure, where the researcher and the vendor coordinate a response. When you start talking about governments though, you start talking about non-disclosure, and that's when people hold onto these bugs and don't report them to the vendor at all, and the reason they do that is so that they can use them exclusively. So that is one reason why governments hold onto these bugs and want to be notified is so that they have a chance to use them against their adversaries or against their own population before anyone else can use them or even before it gets fixed.
ALI WYNE: So the Cybersecurity Tech Accord had recently released a statement opposing the kinds of reporting requirements we've been discussing. From an industry perspective, what are the concerns when it comes to reporting on vulnerabilities to governments?
DUSTIN CHILDS: Really the biggest concern is making sure that we all have an equitable chance to get it fixed before it gets used. If a single government starts using vulnerabilities to exploit for its own gain, that puts the rest of the world at a disadvantage, and that's the rest of the world, their allies as well as their opponents. So we want to do coordinated disclosure. We want to get the bugs fixed in a timely manner, and keeping them to themselves really discourages that. It discourages finding bugs, it discourages reporting bugs. It really discourages vendors from fixing bugs too, because if the vendors know that the governments are just going to be using these bugs, they might get a phone call from their friendly neighborhood three-letter agency saying, "You know what? Hold off on fixing that for a while." Again, it just puts us all at risk, and we saw this with Stuxnet.
Stuxnet was a tool that was developed by governments targeting another government. It was targeting Iranian nuclear facilities, and it did do damage to Iranian nuclear facilities, but it also did a lot of collateral damage throughout Europe as well, and that's what we're trying to avoid. It's like if it's a government on government thing, great, that's what governments do, but we're trying to minimize the collateral damage from everyone else who was hurt by this, and there really were a lot of other places that were impacted negatively from the Stuxnet virus.
ALI WYNE: And Serge, what would you say to someone who might respond to the concerns that Dustin has raised by saying, "Well, my government is advanced and capable enough to handle information about vulnerabilities responsibly and securely, so there's no issue or added risk in reporting to them." What would you say to that individual?
SERGE DROZ: The point is that there are certain things that really you only deal with on a need-to-know basis. That's something that governments actually do know. When governments deal with confidential or critical information, it's always on a need-to-know basis. They don't tell it to every government employee, even though they are, of course, loyal. It makes the risk of this leaking bigger, even if the government doesn't have any ill intent, so there's just no need, the same way there is no need for all the other hundred thousand security researchers to know about this. So I think as long as you cannot contribute constructively to mitigating this vulnerability, you should not be part of that process.
Having said that, though, there are some governments that actually have really tried hard to help researchers make contact with vendors. Some researchers are afraid to report vulnerabilities because they feel they're going to come under pressure or something like that. So if a government wants to take that role and can create enough trust that researchers trust them, I don't really have a problem, but it should not be mandatory. Trust needs to be earned. You cannot legislate this, and every time you have to legislate something, I mean, come on, you legislate it because people don't trust you.
ALI WYNE: We've spent some time talking about vulnerabilities and why they're a problem. We've discussed some effective and maybe some not-so-effective ways to prevent or manage them better. And I think governments have a legitimate interest in knowing that companies are acting responsibly, and that interest is the impetus behind some of the push, at least, for more regulation and reporting. But what other ways does each of you see for governments to help ensure that companies are mitigating risks and protecting consumers as much as possible?
DUSTIN CHILDS: So one of the things that we're involved with here at the Zero Day Initiative is encouraging governments to allow safe harbor. And really what that means is researchers are safe in reporting vulnerabilities to a vendor without the legal threat of being sued or having other action taken against them so that as long as they are legitimately reporting a bug and not trying to steal or violate laws, as long as they're legitimate researchers trying to get something fixed, they're able to do that without facing legal consequences.
One of the biggest things that we do as a bug bounty program is just handle the communications between researchers and the vendors, and that is really where it can get very contentious. So to me, one of the things that governments can do to help is make sure that safe harbor is allowed so that the researchers know that, "I can report this vulnerability to this vendor without getting in touch with a lawyer first. I'm just here trying to get something fixed. Maybe I'm trying to get paid as well," so maybe there is some monetary value in it, but really they're just trying to get something fixed, and they're not trying to extort anyone. They're not trying to create havoc, they're just trying to get a bug fixed, and that safe harbor would be very valuable for them. That's one thing we're working on with our government contacts, and I think it's a very big thing for the industry to assume as well.
SERGE DROZ: Yes, I concur with Dustin. I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable. That also includes a regulatory framework that actually gets away from this blaming. I mean, writing software is hard; bugs appear. If you just constantly keep bashing people that they're not doing it right, or you threaten them with liabilities, they're not going to talk to you about these types of things. So I think the job of the government is to encourage responsible behavior and to create an environment for that. And there are always going to be a couple of black sheep, and here maybe the role of the government is really to encourage them to play along and start offering vulnerability reporting programs. That's where I see the role of the government: creating good governance to actually enable responsible vulnerability disclosure.
ALI WYNE: Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan. And Serge Droz from the Forum of Incident Response and Security Teams, a community of IT security teams that respond when there is a major cyber crisis. Dustin, Serge, thanks very much for joining me today.
DUSTIN CHILDS: You're very welcome. Thank you for having me.
SERGE DROZ: Yes, same here. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. We have five episodes this season covering everything from cyber mercenaries to a cybercrime treaty. So follow Ian Bremmer's GZERO World feed anywhere you get your podcast to hear more. I'm Ali Wyne. Thanks very much for listening.
Podcast: Would the proposed UN Cybercrime Treaty hurt more than it helps?
Listen: As the world of cybercrime continues to expand, it follows that international legal standards should expand with it. But while many governments around the globe see a need for a cybercrime treaty to set a standard, a current proposal on the table at the United Nations is raising concerns among private companies and nonprofit organizations alike. There are fears it covers too broad a scope of crime and could fail to protect free speech and other human rights across borders, while not actually having the intended effect of combating cybercrime.
In season 2, episode 4 of Patching the System, we focus on the international system of online peace and security. In this episode, we hear about provisions currently included in the proposed Russia-sponsored UN cybercrime treaty as deliberations continue - and why they might cause more problems than they solve.
Our participants are:
- Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord
- Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
TRANSCRIPT: Would the proposed UN Cybercrime Treaty hurt more than it helps?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
NICK ASHTON-HART: We want to actually see a result that improves the situation for real citizens, that actually protects victims of real crimes, and that doesn't allow cybercrime to go unpunished. That's in no one's interest.
KATITZA RODRIGUEZ: By allowing countries to set their own standards of what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it truly serves its purpose in tackling cybercrime without undermining human rights.
ALI WYNE: It's difficult to overstate the growing impact of international cybercrime. Many of us either have been victims of criminal activity online or know someone who has been.
Cybercrime is also big business: it's one of the top 10 risks highlighted in the World Economic Forum's 2023 Global Risk Report, and it's estimated that it could cost the world more than $10 trillion by 2025. Now, global challenges require global cooperation, but negotiations of a new UN Cybercrime Treaty have been complicated by questions around power, free speech, and privacy online.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
In this episode, we'll explore the current draft of what would be the first United Nations Cybercrime Treaty, the tense negotiations behind the scenes, and the stakes that governments and private companies have in those talks.
Last season we spoke about the UN Cybercrime Treaty negotiations when they were still relatively early on in the process. While they had been kicked off by a Russia-sponsored resolution that passed in 2019, there had been delays due to COVID-19.
In 2022, there was no working draft and member states were simply making proposals about what should be included in a cybercrime treaty, what kinds of criminal activity it should address, and what kinds of cooperation it should enable.
Here's Amy Hogan-Burney of the Microsoft Digital Crimes Unit speaking back then:
AMY HOGAN-BURNEY: There is a greater need for international cooperation because as cyber crime escalates, it’s clearly borderless and it clearly requires both public sector and the private sector to work on the problem. Although I am just not certain that I think that a new treaty will actually increase that cooperation. And I’m a little concerned that it might do more harm than good. And so, yes, we want to be able to go after cyber criminals across jurisdiction. But at the same time, we want to make sure that we’re protecting fundamental freedoms, always respectful of privacy and other things. Also, we’re always mindful of authoritarian states that may be using these negotiations to criminalize content or freedom of expression.
Now a lot has happened since then as we've moved from the abstract to the concrete. The chair of the UN Negotiating Committee released a first draft of the potential new cybercrime treaty last June, providing the first glimpse into what could be new international law and highlighting exactly what's at stake. The final draft is expected in November with the diplomatic conference to finalize the text starting in late January 2024.
Joining me are Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord and Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation. Thanks so much for speaking with me today.
KATITZA RODRIGUEZ: Thank you for inviting us.
NICK ASHTON-HART: It's a pleasure to be here.
ALI WYNE: Let's dive right into the cybercrime treaty. Now, this process started as a UN resolution sponsored by Russia, and it was met early on with a lot of opposition from Western democracies, but there were also a lot of member states who genuinely thought it was necessary to address cybercrime. So give us the broad strokes: why might we want a cybercrime treaty?
NICK ASHTON-HART: The continuous expansion of cybercrime at an explosive growth rate is clearly a problem and one that the private sector would like to see more effectively addressed because of course, we're on the front lines of addressing it as victims of it. At one level it sounds like an obvious candidate for international action.
In reality, of course, there is the Budapest Convention on cybercrime, which was agreed in 2001. It is not just a convention that European countries can join, any member state can join. If there hadn't been any international convention, then you could see how it would be an obvious thing to work on.
This was controversial from the beginning because there already is a convention, and it's widely implemented - I think 68 countries have joined it, but 120 countries' laws have actually been impacted by it. There was also a question of who was asking for a new treaty, which raised more questions than answers.
KATITZA RODRIGUEZ: For us in civil society, I don't think the treaty is necessary because there are other international treaties, but I do understand why some states are pushing for this treaty: they feel that their system for law enforcement cooperation is just too slow or not reliable. And they have argued that they have not been able to set up effective mutual legal assistance treaties. But we think those reasons fall short, especially because a lot of the existing mechanisms include solid human rights safeguards, and when an existing mutual legal assistance treaty for international cooperation does not work well, we believe it can be improved and fixed.
And let's be real: there are times when not cooperating is actually the right thing to do, especially when criminal investigations could lead to prosecution of individuals for their political beliefs, their sexual orientation or gender identity, or simply for speaking out, protesting peacefully, or insulting the president or the king.
On top of that, this treaty as it stands now might not even make the cybercrime cooperation process any faster. The negotiators are aiming for mandatory cooperation on almost all crimes on this planet, not just cybercrimes. This could end up bogging down the system even more.
ALI WYNE: Nick, let me just ask you, are there any specific aspects of a new global cybercrime treaty that you think could be genuinely helpful to citizens around the world?
NICK ASHTON-HART: Well, for one, if it focused only on cybercrime - that would be the most fundamental issue. The current trajectory would have this convention address all crimes of any kind, which is clearly an ocean-boiling exercise and creates many more problems than it solves. There are many developing countries who will say, as Katitza has noted, that they don't receive timely law enforcement cooperation through the present system, because if you are not a party to the Budapest Convention, honestly, you have to have a bilateral treaty relationship with every country that you want law enforcement cooperation with.
And clearly, every country negotiating a mutual legal assistance treaty with 193 others is not a recipe for an international system that's actually effective. That's where an instrument like this can come in and set a basic common set of standards so that all parties feel confident that the convention’s provisions will not be taken advantage of for unacceptable purposes.
ALI WYNE: Katitza, I want to bring you back into the conversation. On balance, what do you think of the draft of the treaty as it stands now as we approach the end of 2023?
KATITZA RODRIGUEZ: Honestly, I'm pretty worried. The last negotiation session in New York made it crystal clear that we're short of time and there is still a lot left undecided, especially on critical issues like defining the treaty scope and ensuring human rights are protected.
The treaty was supposed to tackle cybercrime, but it's morphing into something much broader: a general-purpose surveillance tool that could apply to any crime, tech involvement or not, as long as there is digital evidence. We're extremely far from our original goal and opening a can of worms. I agree with Nick when he said that a treaty with a tight focus on just actual cybercrimes, topped with solid human rights protections, could really make a difference. But sadly, what we are seeing right now is very far from that.
Many countries are pushing for sweeping surveillance powers, hoping to access real-time location data and communications for a wide array of crimes with minimal legal safeguards - the checks and balances that put limits on and curb potential abuse of power. This is a big red flag for us.
On the international cooperation front, it's a bit of a free-for-all: the treaty leaves it up to individual countries to set their own standards for privacy and human rights when using these surveillance powers in cross-border investigations.
And we know that the standards of some countries are very far from minimal standards, yet every country that signs a treaty is expected to implement these cross-border cooperation powers. And here's where it gets really tricky. This sets a precedent for international cooperation on investigations, even into activities that might be considered criminal in one country but are actually forms of free expression. This includes laws against so-called fake news, peaceful protests, blasphemy, or expressing non-conforming sexual orientation or gender identity. These are matters of human rights.
ALI WYNE: Nick, from your perspective, what are the biggest concerns for industry right now with the text, with the negotiations as they're ongoing? What are the biggest concerns for industry and is there any major provision that you think is missing right now from the current text?
NICK ASHTON-HART: Firstly, I will say that industry actually agrees with everything you just heard from EFF. And that's one of the most striking things about this negotiation, is in more than 25 years of working in multilateral policy, I have never seen all NGOs saying the same thing to the extent that is the case in this negotiation. Across the board, we have the same concerns. We may emphasize some more than others or put a different level of emphasis on certain things, but we all agree comprehensively, I think, about the problems.
One thing that's very striking is this is a convention which is fundamentally about the sharing of personal information about real people between countries. There is no transparency at all at any point. In fact, the convention repeatedly says that all of these transfers of information should be kept secret.
This is the reality that they are talking about agreeing to, is a convention where countries globally share the personal information of citizens with no transparency at all. Ask yourself if that is a situation which isn't likely to be abused, because I think we know the answer. It's the old joke about you know who somebody is if you put them in a room and turn the lights off. Well, the lights are off and the light switch doesn't exist in this treaty.
And so to us, it would simply be invidious in 2024 to see that bearing the UN logo - it would be outrageous. And that's just the starting place. There are also provisions that would allow one country to ask another to seize the person of, say, a tech worker who is on holiday, or a government worker who is traveling, who has access to passwords of secure systems - to seize that person and demand that they turn over those codes with no reference back to their employer.
As Katitza has said, it also allows countries to ask others to provide location data and communication metadata about where a person is in real time, along with real-time access to their computer information. This is clearly subject to abuse, and when we brought this up with some delegations, they said, "Well, but countries do this already, so why do we have to worry about it?"
I just found that an astonishing level of cynicism: the fact that people abuse international law isn't an argument against trying to limit their ability to do it in this context. We have a fundamental disconnect where we're asking to trust all countries in the world to operate in the dark, in secret, forever, and to believe that that will work out well for human rights.
ALI WYNE: Katitza, let me bring you back into the conversation. You heard Nick's assessment. I'd like to ask you to react to that assessment and also to follow up with you, do you think that there are any critical provisions that need to be added to the current text of the draft treaty?
KATITZA RODRIGUEZ: Well, I agree with many of the points that Nick made. One, keeping a sharp focus on so-called cybercrimes is not only crucial for protecting human rights, from our point of view, but it's also key to making this whole cooperation work. We have got countries left and right pointing out the flaws in the current international cooperation mechanisms, saying they are too flawed, too complex. And yet here we are, heading toward a treaty that could cover a limitless list of crimes. That's not just missing the point; it's setting us up for even more complexity, when the goal should be working together better and more easily to tackle very serious crimes like the ransomware attacks that we have seen everywhere lately.
There are a few things that are also very problematic that are more in the details. One is the provision Nick mentioned, which could be used to coerce individual engineers - people who have the knowledge to access systems - to compel them to bypass their own security measures or the measures of their own employers, without the company actually knowing, putting the engineer in trouble because they won't be able to tell their employer that they are working on behalf of law enforcement. I think these provisions are really draconian, and they're also very bad for security, for encryption, for keeping us safe.
But there's another provision that is also very problematic for us, one on international cooperation too, where it mentions that states should share "items or data required for analysis of investigations." The way it's phrased is very vague and leaves room for states to share entire databases or artificial intelligence training data. This could include biometric data - data that is very sensitive - and it's a human rights minefield. We have seen how biometric data and face and voice recognition can be used against protestors, minorities, journalists, and migrants in certain countries. This treaty shouldn't become a tool that facilitates such abuses on an international scale.
And we also know that Interpol is in the mix too, developing a massive predictive analytics system fed by all sorts of data, including information provided by member states. The issue with predictive policing is that it's often pitched as unbiased since it's based on data and not personal bias, but we know that's far from the truth. It's bound to disproportionately affect Black and other over-policed communities. The data fed into these systems comes from a racially biased criminal punishment system, and arrests in Black neighborhoods are disproportionately high. Even without explicit racial information, the data is tainted.
One other thing: the human rights safeguards in the treaty. As Nick says, the negotiations are secret - no transparency, and we fully agree on that - but the safeguards themselves are also very weak.
As it stands, the main human rights safeguards in the treaty don't even apply to the international cooperation chapter, which is a huge gap. It defers to national law - whatever national law says - and as I said before, for one country that's good and for others it's bad, and that's really problematic.
ALI WYNE: Nick, in terms of the private sector and in terms of technology companies, what are the practical concerns when it comes to potential misuses or abuses of the treaty from the perspective specifically of the Cybersecurity Tech Accord?
NICK ASHTON-HART: In the list of criminal acts in the convention, at the present time, none of them actually requires criminal intent. The criminal acts are only defined as "acts done intentionally without right." This opens the door for all kinds of abuses. For example, security researchers often attempt to break into systems in order to find defects that they can then notify the vendors of, so these can be fixed. This is a fundamentally important activity for the security of all systems globally. They are intentionally breaking into the system, but not for a negative purpose - for an entirely positive one.
But the convention does not recognize how important it is not to criminalize security researchers. The Budapest Convention, by contrast, actually does this. It has very extensive notes on the implementation of the convention, which are a part of the ratification process, meaning countries should not only implement the exact text of the convention, but they should do so in a rule of law-based environment that does, among other things, protect security researchers.
We have consistently said to the member states, "You need to make clear that criminal intent is the standard." The irony here is this is actually not complicated because this is a fundamental concept of criminal law called mens rea, which says that with the exception of certain crimes like murder, for someone to be convicted, you have to find that they had criminal intent.
Without that, you have the security researchers' problem. You also have the issue that whistleblowers routinely provide information without authorization - for example, to journalists or to government watchdog agencies. Those people would also fall foul of the convention as it's currently written, as would journalists' sources, depending on the legal environment in which it's implemented. Like civil society, we have consistently pointed out these glaring omissions, and yet no country, including the developed Western countries you would expect to seize upon this, has moved to include protections for any of these situations.
I have to say that one of the most disappointing things about this negotiation is that so far, most of the Western democracies are not acting to prevent abuses of this convention. They are resisting the efforts of all of us in civil society and the private sector urging them to take action, and they're refusing to do so. There are two notable exceptions, New Zealand and Canada, but the rest, frankly, are not very helpful.
Another issue we have is that it should be much clearer what happens when there's a conflict-of-law problem - where a country asks a provider for cooperation and the provider says, "Look, if we provide this information to you, it's coming from another jurisdiction, and it would cause us to break the law in that jurisdiction." We have repeatedly said to the member states, "You need to provide for this situation, because it happens routinely today, and in such an instance it's up to the two cooperating states to work out between themselves how that data can be provided in a way that does not require the provider to break the law."
If you want to see more effective cooperation and more expeditious cooperation, you would want more safeguards, as Katitza has mentioned. There's a direct connection between how quickly cooperation requests go through and the level of safeguards and comfort with the legal system of the requesting and requested states.
Where a request goes through quickly, it's because the states both see that their legal systems are broadly compatible in terms of rights and the treatment of accused persons and appeals and the like. And so they not only see that the crimes are the same, called dual criminality, but that also the accused will be treated in a way that's broadly compatible with the home jurisdiction. And so there's a natural logic to saying, "Since we know this is the case, we should provide for this in here and ensure robust safeguards because that will produce the cooperation that everyone wants." Unfortunately, the opposite is the case. The cooperation elements continue to be weakened by poor safeguards.
ALI WYNE: I think that both of you have made clear that the stakes are very high for whether this treaty comes to pass, what will the final text be? What will the final provisions be? But just to put a fine point on it, are there concerns that this treaty could also set a precedent for future cybercrime legislation across jurisdictions? I can imagine this treaty serving as a north star in places that don't already have cybercrime laws in place, so Katitza, let me begin with you.
KATITZA RODRIGUEZ: Yes, those concerns are indeed very valid and very pressing. By setting a precedent where broad, intrusive surveillance tools are made available for an extensive range of crimes, we risk normalizing a global landscape where human rights are secondary to state surveillance and control. Law enforcement needs assured access to data, but the checks and balances and the safeguards are there to ensure that we can differentiate between the good cops and the bad cops. The treaty provides a framework that could empower states to use the guise of cybercrime prevention to clamp down on activities that are protected under human rights law.
And I think that this broad approach not only diverts valuable resources and attention away from tackling genuine cybercrimes, but also offers - and here's the answer to your question - a template for future legislation that could facilitate these repressive state practices. It sends a message that it is acceptable to use invasive surveillance tools to gather evidence for any crime deemed serious by a particular country, irrespective of the human rights implications. And that's wrong.
By allowing countries to set their own standards of what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it truly serves its purpose in tackling cybercrime without undermining human rights. The stakes are high, and I know it's difficult, but we're talking about the UN and we're talking about the UN charter. The international community must work together to ensure that it can protect security and also fundamental rights.
NICK ASHTON-HART: I think Katitza has hit the nail on the head, and there's one particular element I'd like to add: something like 40% of the world's countries at the moment either do not have cybercrime legislation or are revising quite old cybercrime legislation. They are coming to this convention - they've told us this - because they believe it can be the forcing mechanism, the template that they can use to ensure they get the cooperation they're interested in.
So the normative impact of this convention would be far greater than in a different situation, for example, where there was already a substantial level of legislation globally and it had been in place in most countries for a long enough period for them to have a good baseline of experience in what actually works in prosecuting cybercrimes and what doesn't.
But we're not in that situation. We're pretty much in the opposite situation and so this convention will have a disproportionately high impact on legislation in many countries because with the technical assistance that will come with it, it'll be the template that is used. Knowing that that is the case, we should be even more conservative in what we ask this convention to do and even more careful to ensure that what we do will actually help prosecute real cybercrimes and not facilitate cooperation on other crimes.
All of this makes things even more concerning for the private sector. We want to actually see a result that improves the situation for real citizens, that actually protects victims of real crimes, and that doesn't allow - as is unfortunately the case here - even large-scale cybercrime to go unpunished. That's in no one's interest, but this convention will not actually help with that. At this point we would have to see it as net harmful to that objective, which is supposed to be a core objective.
ALI WYNE: We've discussed quite extensively the need for international agreements when it comes to cybercrime. We've also mentioned some of the concerns about the current deal on the table. Nick, what would you need to see to mitigate some of the concerns that you have about the current deal on the table?
NICK ASHTON-HART: The convention should be limited to the offenses that it contains. Its provisions should not be available for any other criminal activity or cooperation. That would be the starting place. The second thing would be to inscribe crimes that are self-evidently criminal through providing for mens rea in all the articles to avoid the problems with whistleblowers, and journalists and security researchers. There should be a separate commitment that the provisions of this convention do not apply to actors acting in good faith to secure systems such as those that have been described. There must be, we think, transparency. There is no excuse for a user not to be notified at the point that the offense for which their data was accessed has been adjudicated or the prosecution abandoned and that should be explicitly provided.
People have a right to know what governments are doing with their personal information. We think it should be much clearer what dual criminality is. It should be very straightforward that without dual criminality, no cooperation under the convention will take place so that requests go through more quickly. It's much more clear that it is basically the same crime in all the cooperating jurisdictions. I would say those were the most important.
ALI WYNE: Katitza, you get the last word. What would you need to see to mitigate some of the concerns that you've expressed in our conversation about the current draft text on the table?
KATITZA RODRIGUEZ: First of all, we need to rethink how we handle refusals for cross-border investigations. The treaty is just too narrow here, offering barely any room to say no, even when the request to cooperate violates or is inconsistent with human rights law. We need to make dual criminality a must for invoking the international cooperation powers, as Nick says. This dual criminality principle is a safeguard: it means that if it is not a crime in both countries involved, the treaty shouldn't allow for any assistance. You also need clear, mandatory human rights safeguards in all international cooperation that are robust - with notification, transparency, and oversight mechanisms. Countries need to actively think about potential human rights violations before using these powers.
It would also help if we only allowed cooperation for genuine cybercrimes - real, core cybercrimes - and not just any crime involving a computer or generating electronic evidence, because today even an electric toaster can leave digital evidence.
I just want to conclude by saying that actual cybercrime investigations are often highly sophisticated, and there's a case to be made for an international effort focused on investigating those crimes. But including every crime under the sun in its scope - sorry, it's really a big problem.
This treaty fails to create that focus. Second, it also fails to provide the safeguards for security researchers that Nick explained - we fully agree on that. Security researchers are the ones who make our systems safe. Criminalizing what they do without providing effective safeguards really contradicts the core aim of the treaty, which is to make us more secure and to fight cybercrime. So we need a treaty that is narrow in scope and protects human rights. The end result, however, is a cybercrime treaty that may well do more to undermine cybersecurity than to help it.
ALI WYNE: A really thought-provoking note on which to close. Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord, and Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation. Nick, Katitza, thank you so much for speaking with me today.
NICK ASHTON-HART: Thanks very much. It's been a pleasure.
KATITZA RODRIGUEZ: Thanks for having me on. Muchas gracias. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. Catch all of the episodes from this season, exploring topics such as cyber mercenaries and foreign influence operations by following Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne, thanks for listening.
Podcast: Foreign influence, cyberspace, and geopolitics
Listen: Thanks to advancing technology like artificial intelligence and deep fakes, governments can increasingly use the online world to spread misinformation and influence foreign citizens and governments - as well as citizens at home. At the same time, governments and private companies are working hard to detect these campaigns and protect against them while upholding ideals like free speech and privacy.
In season 2, episode 3 of Patching the System, we're focusing on the international system of bringing peace and security online. In this episode, we look at the world of foreign influence operations and how policymakers are adapting.
Our participants are:
- Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats
- Clint Watts, General Manager of the Microsoft Threat Analysis Center
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
TRANSCRIPT: Foreign Influence, Cyberspace, and Geopolitics
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Teija Tiilikainen: What the malign actors are striving for is that they would like to see us start to compromise our values, to question our own values. We should make sure that we are not going in the direction the malign actors want to steer us.
Clint Watts: From a technical perspective, influence operations are detected by people. Our work of detecting malign influence operations is really about a human problem powered by technology.
Ali Wyne: When people first heard this clip that circulated on social media, more than a few were confused, even shocked.
AI VIDEO: People might be surprised to hear me say this, but I actually like Ron DeSantis a lot. Yeah, I know. I'd say he's just the kind of guy this country needs, and I really mean that.
Ali Wyne: That was not Hillary Clinton. It was an AI-generated deepfake video, but it sounded so realistic that Reuters actually investigated it to prove that it was bogus. Could governments use techniques such as this one to spread false narratives in adversary nations or to influence the outcomes of elections? The answer is yes, and they already do. In fact, in a growing digital ecosystem, there are a wide range of ways in which governments can manipulate the information environment to push particular narratives.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. Today, we're looking at the growing threat of foreign influence operations, state-led efforts to misinform or distort information online.
Joining me now are Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats, and Clint Watts, General Manager of the Microsoft Threat Analysis Center. Teija, Clint, welcome to you both.
Teija Tiilikainen: Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: Before we dive into the substance of our conversation, I want to give folks an overview of how both of your organizations fit into this broader landscape. So Clint, let me turn to you first. Could you quickly explain what it is that Microsoft's Threat Analysis Center does, what its purpose is, what your role is there, and how does its approach now differ from what Microsoft has done in the past to highlight threat actors?
Clint Watts: Our mission is to detect, assess and disrupt malign influence operations that affect Microsoft, its customers and democracies. And it's quite a bit different from really how Microsoft has handled it up until we joined. We were originally a group called Miburo and we had worked in disrupting malign influence operations until we were acquired by Microsoft about 15 months ago.
And the idea behind it is we can connect what's happening in the information environment with what's happening in cyberspace and really start to help improve the information integrity and ecosystem when it comes to authoritarian nations that are trying to do malign influence attacks. Every day, we're tracking Russia, Iran, and China worldwide in 13 languages in terms of the influence operations they do. That's a combination of websites and social media; hack-and-leak operations are a particular specialty of ours, where we work with the Microsoft Threat Intelligence Center. They see the cyberattacks and we see the alleged leaks or influence campaigns on social media, and we can put those together to do attribution about what different authoritarian countries are doing or trying to do to democracies worldwide.
Ali Wyne: Teija, I want to pose the same question to you because today might be the first that some of our listeners are hearing of your organization. What is the role of the European Center of Excellence for Countering Hybrid Threats?
Teija Tiilikainen: So this center of excellence is an intergovernmental body that was established six years ago by nine governments from various EU and NATO countries, but today covers 35 governments. So we cover 35 countries, all EU member states or NATO allies, and we cooperate closely with the European Union and NATO. Our task is more strategic: we try to analyze broad hybrid threat activity, and by hybrid threats we are referring to unconventional threat forms - election interference, attacks against critical infrastructure, manipulation of the information space, cyber, and all of that. We create capacity, we create knowledge and information about these things, and we try to provide recommendations and share best practices among our governments about how to counter these threats and how to protect our societies.
Ali Wyne: We're going to be talking about foreign influence operations throughout our conversation, but first, let's discuss hybrid conflicts such as what we're seeing, and what we have seen so far, in Ukraine. I'm wondering how digital tactics in conflicts have evolved. Bring us up to speed on where we are now when it comes to these digital tactics and conflicts, and how quickly we've gotten here.
Teija Tiilikainen: So it's easier to start with our environment, where societies rely more and more on digital solutions - information technology is a part of our societies. We built that to the benefit of our democracies and their economies. But in this deep conflict we are in - a conflict that has many dimensions, one of them being the one between democracies and authoritarian states - the role of our digital solutions changes, and all of a sudden we see how they have started to be exploited against our security and stability. So it is about our reliance on critical infrastructures, where we have digital solutions, and about the whole information space and the cyber systems.
So this is really a new and strengthening dimension in conflicts worldwide, where we talk more and more about the need to protect our vulnerabilities - and the vulnerabilities increasingly lie in the digital space, in the digital solutions. This is a very technological instrument, more and more requiring advanced solutions from our societies and a very different understanding of threats and risks. If we compare that with the more traditional picture, where armed attacks and military operations used to form threat number one, now it is about the resilience of our critical infrastructures and the security and safety of our information solutions. So the world looks very different, and ongoing conflicts do as well.
Ali Wyne: And that word, Teija, that you use, resilience, I suspect that we're going to be revisiting that word and that idea quite a bit throughout our conversation. Clint, let me turn to you now and ask how foreign influence operations in your experience, how have they evolved online? How are they conducted differently today? Are there generic approaches or do different countries execute these operations differently?
Clint Watts: So to set the scene for where this started, I think the point to look at is the Arab Spring. When it came to influence operations, the Arab Spring, Anonymous, Occupy Wall Street - lots of different political movements all occurred roughly at the same time. We often forget that. That's because social media allowed people to come together around ideas, organize, mobilize, and participate in different activities. That was very significant, I think, for nation states, but for one in particular, which was Russia, which was a little bit thrown off by, let's say, the Arab Spring and what happened in Egypt, for example - but at the same point intrigued by what the ability to go into a democracy, infiltrate it in the online environment, and then start to pit its people against each other might offer.
That was the entire idea of their Cold War strategy known as active measures, which was to go into any nation, particularly in the United States or any set of alliances, infiltrate those audiences and win through the force of politics rather than the politics of force. You can't win on the battlefield, but you can win in their political systems and that was very, very difficult to do in the analog era. Fast-forward to the social media age, which we saw the Russians do, was take that same approach with overt media, fringe media or semi-covert websites that look like they came from the country that was being targeted. And then combine that with covert troll accounts on social media that look like and talk like the target audience and then add the layer that they could do that no one else had really put together, which was cyberattacks, stealing people's information and timing the leaks of information to drive people's perceptions.
That is what really started about 10 years ago, and our team picked up on it very early in January 2014 around the conflict in Syria. They had already been doing it in the conflict in Ukraine 10 years ago, and then we watched it move towards the elections: Brexit first, the U.S. election, then the French and German elections. This is that 2015, '16, '17 period.
What's evolved since is everyone recognizing the power of information and how to use it and authoritarians looking to grab onto that power. So Iran has been quite prolific in it. They have some limitations, they have resource issues, but they still do some pretty complex information attacks on audiences. And now China is the real game changer. We just released our first East Asia report where we diagrammed what we saw as an incredible scaling of operations and centralization of it.
So looking at how to defend, I think it's remarkable that, as Teija mentioned, in Europe a lot of countries that are actually quite small - Lithuania, for example - have been able to mobilize their citizens in a very organized way to help as part of the state's defense: come together with a strategy and network people to spot disinformation and refute it if it comes from Russia.
In other parts of the world, though, it's been much, much tougher - particularly in the United States, where we've seen the Russians and other countries infiltrate audiences, and you're trying to figure out how to build a coherent system of defense when you have a plurality of views and identities and different politics. It's actually somewhat more difficult, I think, the larger a country is to defend, particularly with democracies.
And then the other thing is you've seen the resilience and rebirth of alliances and not just on battlefields like NATO, but you see NATO, the EU, the Hybrid CoE, you see these groups organizing together to come through with definitions around terminology, what is a harm to democracy, and then how best to combat it. So it's been an interesting transition I would say over the last six years and you’re starting to see a lot of organization in terms of how democracies are going to defend themselves moving forward.
Ali Wyne: Teija, I want to come back to you. What impact have foreign influence operations had in the context of the war in Ukraine, both inside Ukraine but also globally?
Teija Tiilikainen: I think this is a full-scale war also in the sense that it is very much a war about narratives. So it is about whose story, whose narrative is winning this war. I must say that Russia has been quite successful with its own narrative if we think about how supportive the Russian domestic population still is with respect to the war and the role of the regime - though of course there are other instruments also in use.
Before the war started, Russia began to promote a very false story about what was going on in Ukraine. There was the argument about an ongoing Nazification of Ukraine. There was another argument about a genocide of the Russian minorities in Ukraine that was supposedly taking place. And there was also a narrative about how Ukraine had become a tool in the toolbox of the Western alliance - that is, NATO - or for the U.S. to exert its influence, and how it was being used offensively against Russia. These were of course all parts of the Russian information campaign - disinformation - with which it justified and legitimized its war.
If we take a look at the information space in Europe, or more broadly in Africa, for instance, today we see that the Western narrative about the real causes of the war - how Russia violated international law and the integrity and sovereignty of Ukraine - this real, fact-based narrative is not doing that well. This proves the strength of foreign influence operations when they are strategic and well-planned, and of course when they are used by actors such as Russia and China that tend to cooperate. Also, outside the war zone, China is using this Russian narrative to put the blame for the war on the West and present itself as a reliable international actor.
So there were many elements in the war, not only the military activity, but I would in particular want to emphasize the role of these information operations.
Ali Wyne: It's sobering not only thinking about the impact of these disinformation operations, these foreign influence operations, but also, Teija, you mentioned the ways in which disinformation actors are learning from one another and I imagine that that trend is going to grow even more pronounced in the years and decades ahead. So thank you for that answer. Clint, from a technical perspective, what goes into recognizing information operations? What goes into investigating information operations and ultimately identifying who's responsible?
Clint Watts: I think one of the ironies of our work is that, from a technical perspective, influence operations are detected by people. One of the big differences - especially as we work with MSTIC, the cyber team, and our team - is that our work of detecting malign influence operations is really about a human problem powered by technology. If you want to be able to understand and get your lead, we work more like a newspaper in many ways. We have a beat that we're covering - let's say it's Russian influence operations in Africa. And we have real humans, people with master's degrees who speak the language - I think the team speaks 13 languages in total amongst 28 of us. They sit and they watch and they get enmeshed in those communities and watch the discussions that are going on.
But ultimately we're using some technical skills, some data science, to pick up those trends and patterns, because the one thing that's true of influence operations across the board is you cannot influence and hide forever. Ultimately your position or, as Teija said, your narratives will track back to the authoritarian country - Russia, Iran, China - and what they're trying to achieve. And there are always tells - common tells, or context: words, phrases, and sentences used out of context. And then you can also look at the technical perspective. Nine years ago, when we came onto the Russians, the number one technical indicator of Russian accounts was Moscow time versus U.S. time.
Ali Wyne: Interesting.
Clint Watts: They worked in shifts. They were posing as Americans but talking at 2:00 AM, mostly about Syria. And so it stuck out, right? That was a contextual thing. You move, though, from those human tips and insights - almost like a beat reporter - to using technical tools. That's where we dive in. So that's everything from understanding associations of time zones, how different batches of accounts on social media might work in synchronization together, and how they'll change from topic to topic time and time again. The Russians are a classic example, wanting to talk about Venezuela one day, Cuba the next, Syria the third day, the U.S. election the fourth, right? They move in sequence.
And so I think when we're watching people and training them, when they first come on board, it's always interesting. We try and pair them up in teams of three or four with a mix of skills. We have a very interdisciplinary set of teams. One will be very good in terms of understanding cybersecurity and technical aspects. Another one, a data scientist. All of them can speak a language and ultimately one is a former journalist or an international relations student that really understands the region and it's that team environment working together in person that really allows us to do that detection but then use more technical tools to do the attribution.
Ali Wyne: So you talked about identifying and attributing foreign influence operations, and that's one matter, but how do you actually combat them? How do you combat those operations, and what role, if any, can the technology industry play in combating them?
Clint Watts: So we're at a key spot, I think, at Microsoft in protecting the information environment, because we understand the technical signatures much better than any one government could or probably should. There are lots of privacy considerations - we take them very seriously at Microsoft - about maintaining customer privacy. At the same point, the role that tech can play is illustrated by some of our recent investigations. In one of them, we found more than 30 websites being run out of the same three IP addresses, all sharing content pushed from Beijing to local environments - and local communities don't have any idea that those are actually Chinese state-sponsored websites.
So what we can do, being part of the tech industry, is confirm from a technical perspective that all of this activity is linked together. I think that's particularly powerful. Also, in terms of the cyber-influence convergence, as we would say, we can see a leak operation where an elected official in one country is targeted by a foreign cyberattack as part of a hack-and-leak operation. We can see where the hack occurred. If we have good attribution on it - on Russia and Iran in particular, we have very strong attribution and publish on it frequently - then we can match that up with the leaks that we see coming out and where they come from. And usually the first person to leak the information is in bed with the hacker that got the information. So that's another role that tech can play: awareness of who the actors are, what the connections are between an influence operation and a cyberattack, and how that can change people's perspectives, let's say, going into an election.
Ali Wyne: Teija, I want to come back to you to ask about a dilemma that I suspect you, your colleagues, and everyone operating in this space are grappling with. I think one of the central tensions in combating disinformation, of course, is preserving free speech in the nations where it exists. How should democracies approach that balancing act?
Teija Tiilikainen: This is a very good question, and I think what we should keep in mind is that what the malign actors are striving for is to see us start to compromise our values, to question our own values. So open society, freedom of speech, rule of law, democratic practices, and the principle of democracy - we should stick to our values and make sure that we are not going in the direction the malign actors would want to steer us. But it is exactly as you formulate the question: how do we make sure that these values are not exploited against our broad societal security, as is happening right now?
So of course, there is not one single solution. Technological solutions can certainly help us protect our society, as can broad awareness in society about these types of threats. Media literacy is the keyword often mentioned in this context. A totally new approach to the information space is needed, and it can be achieved through education and study programs, but also by supporting quality media - the kind of media that relies on journalistic ethics. So we must make sure that our information environment is solid and that in the future we will still be able to make a distinction between disinformation and facts, because that distinction is getting very blurred in a situation where a competition between narratives is going on. Information has become a tool in many of the conflicts we have in the international space, and many times at the domestic level as well.
I would like to offer the center's model, because we need not only cooperation between private actors - companies, civil society actors - and governmental actors within states. We also need firm cooperation among like-minded states: sharing best practices and learning from each other. If the malign actors do that, we should also take that model into use when it comes to questions such as how to counter these threats, how to build resilience, and what solutions we have created in our different societies. And this is why our center of excellence has been established: exactly to provide a platform for that sharing of best practices and learning from each other.
So it is a very complicated environment in terms of our security and our resilience, and we need a broad package of tools to protect ourselves. But I still want to stress our values and the very fact that this is what the malign actors would like to challenge - and would like us to challenge as well. So let's stick to them.
Ali Wyne: Clint, let me come back to you. So we are heading into an electorally very consequential year. And perhaps I'm understating it, 2024 is going to be a huge election year, not only for the United States but also for many other countries where folks will be going to polls for the first time in the age of generative artificial intelligence. Does that fact concern you and how is artificial intelligence changing this game overall?
Clint Watts: Yeah, so I think it's too early for me to say as strange as it is. I remind our team, I didn't know what ChatGPT was a year ago, so I don't know that we know what AI will even be able to do a year from now.
Ali Wyne: Fair point, fair point.
Clint Watts: In the last two weeks, I've seen or experimented with so many different AI tools that I just don't know the impact yet. I need to think it through and watch a little bit more in terms of where things are going with it. But there are a few notes that I would say about elections and deepfakes or generative AI.
Since the invasion of Ukraine, we have seen very sophisticated fakes of both Zelensky and Putin and they haven't worked. Crowds, when they see those videos, they're pretty smart collectively about saying, "Oh, I've seen that background before. I've seen that face before. I know that person isn't where they're being staged at right now." So I think that is the importance of setting.
Public versus private I think is where we'll see harms in terms of AI. When people are alone and AI is used against them, let's say, a deepfake audio for a wire transfer, we're already seeing the damages of that, that's quite concerning. So I think from an election standpoint, you can start to look for it and what are some natural worries? Robocalls to me would be more worrisome really than a deepfake video that we tend to think about.
The other thing about AI that I don't think gets enough examination, at least from a media perspective: everyone thinks they'll see a politician say something. Your opening clip is an example of it, and it will fool audiences in a very dramatic way. But the power of AI in terms of utility for influence operations is mostly about understanding audiences, or being able to connect with an audience with a message and a messenger that is appropriate for that audience. And by that I mean creating messages that make more sense.
Part of the challenge for Russia and China in particular is always context. How do you look like an American or a European in their country? Well, you have to be able to speak the language well. That's one thing AI can help you with. Two, you have to look like the target audience to some degree. So you could make messengers now. But I think the bigger part is understanding the context and timing and making it seem appropriate. And those are all things where I think AI can be an advantage.
I would also note that here at Microsoft, my philosophy with the team is machines are good at detecting machines and people are good at detecting people. And so there are a lot of AI tools we're already using in cybersecurity, for example, with our copilots where we're using AI to detect AI and it's moving very quickly. As much as there's escalation on the AI side, there's also escalation on the defensive side. I'm just not sure that we've even seen all the tools that will be used one year from now.
Ali Wyne: Teija, let me just ask you about artificial intelligence more broadly. Do you think that it can be both a tool for combating disinformation and a weapon for promulgating disinformation? How do you view artificial intelligence broadly when it comes to the disinformation challenge?
Teija Tiilikainen: I see a lot of risks. I also see possibilities, and artificial intelligence can certainly be used as a resilience tool. But the question is more about who is faster - whether the malign actors take full advantage of AI before we find the loopholes and possible vulnerabilities. I think it's very much a hardcore question for our democracies. The day when an external actor can interfere efficiently in our democratic processes - in elections, in election campaigns - the very day when we can no longer be sure that what is happening in that framework is domestically driven, that day will be very dangerous for the whole democratic model, the whole functioning of our Western democracies.
And we are approaching that day, and AI is, as Clint explained, one possible tool for malign actors who want not only to discredit the model but also to interfere in democratic processes - to affect the outcomes of elections, the topics of elections. So deepfakes and all the solutions that use AI are so much more efficient, so much faster, and able to use so much more data.
So I see unlimited possibilities unfortunately for the use of AI for malign purposes. So this is what we should focus on today when we focus on resilience and the resilience of our digital systems.
And this is also a highly unregulated field, including at the international level. If we think about weapons, about military force - well, now we are in a situation of deep conflict, but before we got there we used to have agreements, treaties, and conventions between states that regulated the use of weapons. Those agreements are no longer in very good shape. But what do we have in the realm of cyber? At the international level, this is a highly unregulated field. So there are many problems, and I can only encourage and stress the need to identify the risks that come with these solutions. And of course we need regulation of AI solutions and systems at the state level, as well as, hopefully at some point, international agreements concerning the use of AI.
Ali Wyne: I want to close by emphasizing that human component and ask you, as we look ahead and think about ways in which governments, private sector actors, individuals, and others in this ecosystem can be more effective at combating disinformation and foreign influence operations: what kinds of societal changes need to happen to neutralize the impact of these operations? So talk to us a little bit more about the human element of this challenge and what kinds of changes need to happen at the societal level.
Teija Tiilikainen: I would say that we need a cultural change. We need to understand societal security very differently. We need to understand the risks and threats against societal security in a different way. And this is about education. This is about schools, this is about study programs at universities. This is about openness in media, about risks and threats.
And this must also happen in countries that do not have the tradition we have in the Nordic countries. Here in Finland, and in Scandinavia, we have a firm tradition of public-private cooperation when it comes to security policy. We are small nations, and the geopolitical region has been unstable for a long time, so there is a need for public and private actors to share the same understanding of security threats and to cooperate to find common solutions. I can only stress the importance of public-private cooperation in this environment.
We need more systematic forms of resilience. We have to ask ourselves: what does resilience mean? Where do we start building resilience? What are all the necessary components of resilience that we need to take into account? We have international elements, national elements, and local elements; we have governmental and civil society parts, and they are all interlinked. There is no safe space anywhere. We need to create comprehensive solutions that cover all possible vulnerabilities. So I would say that the security culture needs to change. In the old security culture, we tended to think about domestic threats and then international threats separately - now they are part of the same picture. We tended to think about military and nonmilitary - they too are very much interlinked in this new technological environment. So we need new types of thinking, a new type of culture. I would like to get back to universities and schools and try to engage experts to think about the components of this new culture.
Ali Wyne: Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats. Clint Watts, General Manager of Microsoft Threat Analysis Center. Teija, Clint, thank you both very much for being here.
Teija Tiilikainen: Thank you. It was a pleasure. Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of our new season. I'm Ali Wyne. Thank you very much for listening.
- Podcast: Cyber mercenaries and the global surveillance-for-hire market ›
- Podcast: How cyber diplomacy is protecting the world from online threats ›
- Attacked by ransomware: The hospital network brought to a standstill by cybercriminals ›
- Hacked by Pegasus spyware: The human rights lawyer trying to free a princess ›
- The threat of CEO fraud and one NGO's resilient response ›
- Why privacy is priceless - GZERO Media ›
- Would the proposed UN Cybercrime Treaty hurt more than it helps? - GZERO Media ›
- Podcast: Can governments protect us from dangerous software bugs? - GZERO Media ›
- Podcast: Cyber Mercenaries and the digital “wild west" - GZERO Media ›
Podcast: Cyber mercenaries and the global surveillance-for-hire market
Listen: The use of mercenaries is nothing new in kinetic warfare, but they are becoming a growing threat in cyberspace as well. The weapon of choice for cyber mercenaries is malicious spyware that undermines otherwise benign technologies and can be sold for profit. Luckily, awareness about this threat is also growing, and increasing global coordination efforts are being put forth to combat this dangerous trend.
In episode 2, season 2 of Patching the System, we're focusing on the international system of bringing peace and security online. In this episode, we look at what governments and private enterprises are doing to combat the growth of the cyber mercenary industry.
Our participants are:
- Eric Wenger, senior Director for Technology Policy at Cisco
- Stéphane Duguin, CEO of the CyberPeace Institute
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: Cyber mercenaries and the global surveillance-for-hire market
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Eric Wenger: There's no phishing or fooling of the user into installing something on their device. This technology is so powerful that it can overcome the defenses on a device. So this is a tool that is on the level of sophistication with a military grade weapon and needs to be treated that way.
Stéphane Duguin: What we're facing is a multifaceted threat: a loose network of individuals, financiers, and companies that act as a link between states when it comes to the deployment of these surveillance capabilities. So if you want to curb this kind of threat, you need to act as a network.
Ali Wyne: In the ongoing war in Ukraine, both sides have employed mercenaries to supplement and fortify their own armies. Now, guns for hire are nothing new in kinetic warfare, but mercenaries exist in cyberspace as well to augment government capabilities, and their weapon of choice is malicious spyware that undermines peaceful technology and can be sold for profit. Today we'll enter the world of cyber mercenaries and the work that's being done to stop them.
Welcome to Patching The System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. In this episode, we're looking at the latest in cyber mercenaries and what's being done to stop them. Last season we spoke to David Agranovich, director of Global Threat Disruption at Meta, about what exactly it is that cyber mercenaries do.
David Agranovich: These are private companies who are offering surveillance capabilities, which once were essentially the exclusive remit of nation state intelligence services, to any paying client. The global surveillance for hire industry, for example, targets people across the internet to collect intelligence, to try and manipulate them into revealing information about themselves and ultimately to try and compromise their devices, their accounts, steal their data.
Ali Wyne: And since then, awareness has grown and efforts to fight these groups have been fast tracked. In March of this year, the Tech Accord announced a set of principles specifically designed to curb the growth of the cyber mercenary market, which some estimate to be more than $12 billion globally. That same month, the White House issued an executive order to prohibit the U.S. government from using commercial spyware that could put national security at risk, an important piece of this cyber mercenary ecosystem.
On the other side of the Atlantic, a European Parliament committee finalized a report on the use of spyware on the continent and made recommendations for regulating it. And most recently, bipartisan legislation was introduced in the United States to prohibit assistance to foreign governments that use commercial spyware to target American citizens.
Are all of these coordinated efforts enough to stop the growth of this industry? Today I'm joined by Eric Wenger, Senior Director for Technology Policy at Cisco, and Stéphane Duguin, CEO of the CyberPeace Institute. Welcome to you both.
Eric Wenger: Thank you.
Stéphane Duguin: Thank you.
Ali Wyne: Now, I mentioned this point briefly in the introduction, but I'd love to hear more from both of you about specific examples of what it is that cyber mercenaries are doing. What characterizes their work, especially from the latest threats that you've seen?
Stéphane Duguin: It's important maybe to start with a bit of a definition of what we are talking about when we talk about cyber mercenaries. Interestingly, there is the official definition and then there is what we all mean. The official definition - you can find it in the report to the General Assembly of the United Nations - is really linked to private actors that can be engaged by states and non-state actors. It's really about states engaging someone, contracting someone, to conduct cyber operations in the context of an armed conflict.
I would argue that for this conversation, we need to look at the concept of cyber mercenaries more broadly: as a network of individuals, companies, financial tools, and specific interests that, at the end of the day, ensure global insecurity. Because all of this is about private sector entities providing their expertise, their time, and their tools to governments to conduct, clearly at scale, illegal and unethical surveillance. And to do this, investment - money - needs to pour into a market. And it's a market which finances what? Global insecurity.
Eric Wenger: I would add that there's another layer to this problem that needs to be put into context. As Stéphane correctly noted, these are private sector entities, and their customers are governments engaged in some sort of activity that is couched in terms of protecting safety or national security. But the companies themselves are selling technology that is regulated and therefore is being licensed from a government as well. I think that's really the fascinating dynamic here: you have a private sector intermediary essentially involved in a transaction from one government to another government, with that private sector actor in the middle being the creator of the technology, subject to a license by one government for a sale to another government.
Ali Wyne: This market is obviously growing quickly, and I mentioned in my introductory remarks that $12 billion global figure, so obviously there's a lot of demand. From what you've seen, who are the customers and what's driving the growth of this industry?
Eric Wenger: Well, the concerning part of the story is that there have been a number of high profile incidents indicating these technologies are being used not just to protect against attacks on a nation, but to benefit the stability of a regime. And in that context, what you see are journalists, dissidents, and human rights activists being the targets of these technologies. That's the part that really strikes me as quite disturbing. And it is frankly the hardest part of this problem to get at, because, as I noted before, if you have these private sector actors essentially acting as intermediaries between governments, then it's hard to have much visibility from outside this market into the justifications that are enabling sales. Who is this technology going to? How is it being used, and how is it potentially being checked in order to address the human rights concerns that I've flagged here?
Ali Wyne: Stéphane, let me come back to you. You used to work in law enforcement, so one question someone might ask is: why shouldn't governments be taking advantage of cyber mercenaries if they are making tools that help to, for example, track down terrorists or otherwise fight crime and improve national defense?
Stéphane Duguin: Something that is quite magical about law enforcement: it's about enforcing the law. And in this case, there's clear infringement all over the place. Let's look into the use cases that we know about. When it comes to the law, what kind of judicial activities have been undertaken after the use, sale, or export of these kinds of tools? There's this company, Amesys, which is now being sued for complicity in acts of torture over sales of surveillance technologies to Libya. You have cases of dissidents who have been arrested in Egypt in the context of the acquisition of the Predator tool. More recently, we've seen what happened in Greece with the investigation around the surveillance of critics and opponents. And you can add example after example. This has nothing to do with law enforcement.
My experience in law enforcement is that when you have a case, you have oversight - judicial oversight. I was lucky to work in law enforcement in Europe, in a democratic construct that operates under the oversight of parliament. Where is that construct when a private sector entity has free rein to research and develop, enhance, and export - exactly as was said before, between states - a technology which, by the way, is creating expertise within that same company, for people who are then going to sell this expertise left and right? Where is the oversight? And where are the rules that would put this into a normal law enforcement system?
And just to finish on this: I worked on investigating terrorist groups and cyber gangs for most of my career, and we can make cases - very, very good cases. The problem, I would argue, is not about putting everyone under surveillance. The problem is about investing resources in law enforcement and in the judicial system to make sure that when there's a case, there's accountability, redress, and repair for victims. And these do not require surveillance at scale.
Ali Wyne: Eric, let me come back to you. I want to give listeners a sense of the size of the problem and help put it in perspective. When we talk about cyber mercenaries, just how big is the threat from them and the organizations for which they work? Is that threat merely an annoyance, or is it a real cause for concern? And who's most affected by the actions that they take?
Eric Wenger: We could talk about the size of the market and who is impacted by it. That's certainly part of the equation in trying to size the threat. But we also have to have a baseline understanding of the technology we're talking about in order for people to appreciate why there's so much concern. We're talking about exploits that can be sent from the deployer of the technology to a mobile device used by an individual or an organization without any action being taken by the user. There's nothing you have to click, nothing you have to accept. There's no phishing or fooling of the user into installing something on their device. This technology is so powerful that it can overcome the defenses on a device. And then that device is completely compromised: cameras can be turned on, files stored on the device can be accessed, microphones can be activated.
So this is a tool on the level of sophistication of a military grade weapon and needs to be treated that way. The concern is the cutout of a private sector entity in between governments: these are typically democratic governments licensing these technologies to other governments that wouldn't have the capabilities to develop them on their own. And once in their hands, it's difficult, if not impossible, to make sure that they are used only within the bounds of whatever the original justification was.
So in theory, say there was some concern about a terrorist operation that justified access to this technology. In that government's hands, the technology can be repurposed for other things that might be a temptation, including protecting the stability of the regime by going after critics, dissidents, or journalists who are writing things the government views as unhelpful to its ability to govern. And so those lines are very difficult to maintain when a technology this powerful is in the hands of a government without the type of oversight that Stéphane was referencing before.
Ali Wyne: So Stéphane, let me come back to you. Building off of the answer Eric just gave, what groups and individuals are most at risk from this growing cyber mercenary market?
Stéphane Duguin: History has shown who has been targeted by the deployment of these tools and the activities of the cyber mercenaries: political opponents, journalists, human rights defenders, lawyers, government officials, pro-democracy activists, opposition members, and so on. So we are quite far from terrorists, organized crime, and the like.
And interestingly, it's not only that this profile of who is targeted says a lot about the ethics and values underlying this market ecosystem. What is also concerning is that we know about this not from law enforcement or from public sector entities, which would investigate the misuse of these technologies and blow the whistle. We know about this thanks to the amazing work of a few organizations over the past decade, like the Citizen Lab and Amnesty Tech, who could track and demonstrate the usage of, for example, FinFisher against pro-democracy activists in 2012 and opposition members in 2013, FinSpy afterwards, and then Pegasus, from the firm NSO.
Now we have the whole explanation of what happened with Predator. It's quite concerning that these activities, which are at the core of abuses of human rights and of the most essential privacy, are not only happening in the shadows, as Eric was mentioning before, with a total asymmetry between the almost military-grade tools that are put in place and the little capacity of the targets to defend themselves. They are also uncovered not by the people we entrust with our public services and the enforcement of our rights, but by investigative groups and civil society, which are now almost making a living out of conducting global investigations into the misuse of offensive cyber capabilities.
Ali Wyne: Your organization, the CyberPeace Institute, what is the CyberPeace Institute doing to combat these actors? And more broadly, what is the role of civil society in working to address this growing challenge of cyber mercenary actors?
Stéphane Duguin: What we're facing is a multifaceted threat: a loose network of individuals, financiers, and companies that act as a link between states when it comes to the deployment of these surveillance capabilities. So if you want to curb this kind of threat, you need to act as a network. The role of the CyberPeace Institute, among other civil society organizations, is to bring together the capable and the willing so that we can look at the whole range of issues we're facing.
One part of it is the research, development, and deployment of these tools. The second part is the detection of their usage. Another part is looking into the policy landscape, informing policymaking, and demonstrating when policies have been violated - export controls, for example, when it comes to the management of these tools. Another part of the work is about measuring the human harm these tools are leading to.
So we, for example, at the CyberPeace Institute cooperated in the development of the Digital Violence Platform, which shows the human impact of, for example, the usage of Pegasus on individuals. We are also in the lead in one of the working groups of the Paris Peace Forum. We need to bring a multi-stakeholder community to a maturity level where it understands exactly what this threat is costing society and what kind of action we could take all together.
And notably, last year at the World Economic Forum, we joined forces with Access Now, the Office of the UN High Commissioner for Human Rights, Human Rights Watch, Amnesty International, the International Trade Union Confederation, and Consumers International to call for a moratorium on the usage of these tools until we have the certainty that they are researched, deployed, exported, and used with the proper oversight, because otherwise the checks and balances cannot work.
Ali Wyne: And you just mentioned Pegasus spyware and that kind of software has been getting more and more attention, including from policymakers. So Eric, let me come back to you now. What kinds of actions are governments taking to curb this market?
Eric Wenger: As I noted before, this is an interesting combination of the technology, the private sector entities that are creating it, the regulators in the governments where those companies are located who control the sale of the technology, and then the technology consumers who are, as Stéphane noted, other governments. And so it's this interesting blend of private and public sector actors that's going to require a coordinated approach running across both. And I think you're seeing action in both of those spheres. In terms of private sector companies, Cisco, my employer, joined together with a number of other companies - I believe it was Microsoft and Dell and Apple and others - in filing a friend of the court, or amicus, brief in litigation that had been brought by what was then Facebook, now Meta, against a company that was deploying technology that had hacked into their WhatsApp software.
We of course come together under the umbrella of the Tech Accord, and we can talk about the principles that we developed among the companies. I think there are 150 companies that ultimately signed that document, in agreement that we have concerns and that there are things we want to do in a concerted way to try to get at this market so that it doesn't cause the kinds of impacts that Stéphane talked about before.
Again, there's clearly a strong government-to-government piece of this that needs to be taken on. Stéphane also noted the Paris Peace Forum, and that this topic of how to deal with spyware and cyber mercenaries is going to be on the agenda there, which is important because this is a government-led forum, but one where you also see private sector and civil society entities actively engaged. Stéphane also mentioned the important work being done by Citizen Lab. And then we have threat intelligence researchers at Cisco who operate under the brand of Talos.
These are some of the most effective threat intelligence researchers in the world, and they're really interested in this problem as well. They are starting to work with people who suspect that their devices may have been compromised in this way, to take a look at them and to help them.
And then the companies that make the cell phones and operating systems, Google and Apple for instance, have been doing important work on detecting these kinds of changes to devices and providing notice to those whose devices may have been impacted, so that they are aware and able to take further defensive measures. It's really quite an active space, and as we've discussed here several times, it's one that will only be effectively taken on through a concerted effort that runs across government, the private sector, and civil society.
Ali Wyne: Talk to us a little bit about what technology companies can do to shut down this market.
Eric Wenger: Yeah, it was natural that this would grow out of the Tech Accord, which itself was a commitment by companies to protect their customers against attacks coming from the government space that misuse technology. There was a recognition among our companies that yes, some of this is most effectively addressed at that government-to-government level, with awareness being created by civil society. But this is also a problem that relates to the creation of technology, and the companies engaged in these business models are procuring and using technology that could be coming from companies that find this business model highly problematic.
So that's essentially what we did: we sat down as a group and started to talk about the part of the problem that technology, and access to technology, potentially contributes - the part we have some ability to make a difference on. We then agreed amongst ourselves on the steps we might be able to take to limit the proliferation of this technology, of this market, and of the companies engaging in this type of business. That, coming together with the work being done at the government-to-government level, will hopefully make a significant dent in the size of this market.
Ali Wyne: Stéphane, let me come back to you as promised. Whether it's governments, whether it's technology companies, what kinds of actions can these actors take to shut down this cyber mercenary market?
Stéphane Duguin: Eric listed a lot of what is happening in this space, and it's very exhaustive; it tells you how complex the answer is. We try to put this into a framework: what is expected from states is regulation first. Regulation means not just having the regulation but implementing it. And under the word regulation, I would even put the norms discussion, where there are non-binding norms that have been agreed between states, some of which could be leveraged and operationalized in order to prevent such a proliferation - because that's what we're talking about.
Another type of regulation that could be far better implemented is export control. For example, in the European Union, we at the CyberPeace Institute were discussing this in the context of the PEGA Committee - the work of the EU Parliament when it comes to looking into the lawfulness and ethical use of these kinds of tools.
There is also the multi-stakeholder approach of the EU Cyber Agora to discuss the problem, and clearly export control needs to be brought to another level of operationalization. So: regulation. Regulation then needs to mean capacity to litigate - giving the space and the means to the apparatus that is in the business of litigation.
So what do we have today? Executives from Amesys and Nexa Technologies were indicted for complicity in torture; NSO Group is facing multiple lawsuits brought mostly by civil society and corporate plaintiffs in various countries. But that's clearly not enough.
This should not only come from civil society, journalists, and plaintiffs; we should also see investigative capacity from states - meaning law enforcement looking into this kind of misuse. The other part is attribution: public attribution of what is happening. Who are the actors, what are these companies, how are these networks working?
That way we can see over time how the regulation and the litigation are having an impact on the ecosystem. Otherwise, it's like emptying the ocean with a spoon. The great work done by the community we mentioned before - the Citizen Lab, Amnesty Tech, Access Now, and tons of other organizations I don't want to forget - is not going to scale if policymakers do not do their job. And what is policymaking in the criminal context? It is reducing the space that you give to criminals. Today, in the context of cyber mercenaries, that space is way too big. So I would say regulation, litigation, and public attribution: that's a roadmap for governments.
Ali Wyne: Eric, let me come back to you. You already mentioned in one of your earlier answers the principles that the Tech Accord came out with recently - just a few months ago, in fact - to oppose the cyber mercenary industry. Talk to us a little bit more about what exactly those principles entail and what their intended impact is.
Eric Wenger: Sure. Stéphane also makes an important point about what governments can do - things like putting companies of concern on the entity list to restrict their ability to license technology that they might need in order to build the tools they are selling. But coming back to where companies like those who joined the Tech Accord can make a difference: I noted that these principles build on the Cybersecurity Tech Accord's founding commitments, which are about building strong defense into our products, not enabling offensive use of our technologies, capacity building - in other words, helping governments do the work they need to protect their citizens - and working together across these different domains with the private sector, civil society, and governments. These particular principles are aimed at this specific problem. The idea is that we will collectively try to work together to counter the use of products that harm people, and we can identify ways to actively counter the market.
One of the ways we mentioned before is participation in litigation, where that's the appropriate step. We're also investing in cybersecurity awareness for customers so that they have more understanding of this problem. There are tools being built by the companies that develop the operating systems on mobile devices: if you're in a highly vulnerable group - a journalist, a human rights dissident, or a lawyer working in an oppressive legal environment - some of these phones now enable more defensive modes. This is an example of our companies working together, and on our own, to protect customers and users by building up the security capabilities of our devices and products.
And then finally - Stéphane mentioned his role in law enforcement before; I also was a computer crime prosecutor at the Department of Justice - it's really important for those conducting legitimate, lawful investigations to have clear understandings of the processes that companies use to handle valid legal requests for information. So we built that into this set of principles as well: where there are legal and lawful pathways to get information through a company's lawful intercept or compulsory access tools, we are transparent about how we operate in those spaces and we clearly communicate our processes for handling those kinds of demands from governments.
Ali Wyne: Final question for both of you. What is the single most important step that societies can take to stop the work of cyber mercenaries?
Stéphane Duguin: Eric framed it very, very well in terms of the ambition and the partnership - the activities deployed by civil society and by corporations, the Tech Accord being an excellent example, in order to curb these threats. And interestingly, maybe it also came from the fact that there was not so much push on the government side to do something at scale against this threat. So clearly, those pushing the ball today, representing society and its needs in this context, are civil society, corporations, and academia. And I would say governments are now starting to grasp the size of the problem. Something that Eric mentioned I would like to build on, because it's about society and the values that we believe in as a society: there's a need for law enforcement, and a lot of people in law enforcement and the judiciary want to work in a lawful way. That's the vast majority, at least in the law enforcement I can speak to in Europe, where I worked.
In this context, it's quite important that the framework is clear and the capacities and resources are there, so that there isn't so much space for these cyber mercenaries to impose themselves as the go-to platform, the place where solutions can be engineered because there's nothing else out there. Beyond that, society has to make a choice. Do we want such a market to proliferate without, today, any checks and balances, any oversight - just the wild west of surveillance? Or do we say stop, at minimum with a moratorium, and put in place clear oversight processes, looking into what makes sense and what we can accept as a society before letting this go? And the last thing is to invest as best we can in the regulation that we have and that we're going to have. For the regulation now under negotiation in the EU, like the AI Act or the Cyber Resilience Act or the Cyber Solidarity Act, it would not take much to have it look not only into what makes systems insecure, but also into who is trying to make systems insecure.
Ali Wyne: Eric, let me come to you to close us out and put the same question to you. What is the single most important step that societies can take to stop the work of cyber mercenaries?
Eric Wenger: Well, I'd love to say it was one thing, but it really is going to be a combination of things that come together as one. And that's really going to involve this dynamic where the governments that regulate access to the market for this technology work together. It may not be reasonable to expect that the governments that want to consume this technology will come to the table, but certainly the governments that have control over the markets where the technology is being developed can. And as Stéphane mentioned, the United States government, the French government, and the UK government have all been out in front on this.
Those governments, and others that share these concerns, coming together with the experts in the threat intelligence space, in academia, in civil society, and in companies. And the companies that supply the critical, foundational technologies that firms developing these surveillance tools need in order to engage in the market also have an important role to play. I think that's what we're bringing to the equation for the first time.
So it's this combination of actors that are coming together, recognizing that it's a problem and agreeing that there's something that we all need to do together in order to take this on. It's really the only way that we can be effective at addressing the concerns that we've been discussing here today.
Ali Wyne: Eric Wenger, Senior Director for Technology Policy at Cisco. Stéphane Duguin, CEO of the CyberPeace Institute. Thank you both so much for speaking with me today.
Eric Wenger: Thank you for having us.
Ali Wyne: And that's it for this episode of Patching the System. There are more to come, so follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of this new season. I'm Ali Wyne. Thank you very much for listening.
Podcast: How cyber diplomacy is protecting the world from online threats
Listen: Just as bank robbers have moved from physical banks to the online world, those fighting crime are also increasingly engaged in the digital realm. Enter the world of the cyber diplomat, a growing force in international relations specifically focused on creating a more just and safe cyberspace.
In season 2 of Patching the System, we're focusing on the international systems and organizations working to bring peace and security online. In this episode, we're discussing the role of cyber diplomats, the threats they are combating, and how they work with the public and private sectors to accomplish their goals.
Our participants are:
- Benedikt Wechsler, Switzerland's Ambassador for Digitization
- Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: How cyber diplomacy is protecting the world from online threats
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
BENEDIKT WECHSLER: We have to be aware that although we are so familiar with the cyber and digital world, it's still a new technology. And I think we don't have that much time to develop these organizations and rules as we had for the maritime or for the airspace.
KAJA CIGLIC: This situation is both terrifying and sort of deteriorating, I would say, at the same time. Part of the reason is because the technology's evolving so fast. Every time there is a new tool put on the market, it can, and someone will try and test it as a weapon.
ALI WYNE: It is hard to overstate just how much we rely on digital technology and connectivity in our daily lives, from the delivery of essential services, including drinking water and electricity, to how we work, pay our bills, get our news. Increasingly, it all depends on an ever-growing cyberspace. But as humanity's digital footprint grows, cyberspace is also growing as a domain of conflict, where attacks have the potential to bring down a power grid, rattle the stock market, or compromise the data and security of millions of people in just moments.
Got your attention? Well, good.
Welcome to the second season of Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst with Eurasia Group.
Now, whether you're a policy expert, you're a curious coder, or you're just a listener wondering if your toaster is plotting global domination, this podcast is for you. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
Last season, we tackled some of the more practical aspects of cybersecurity, including protecting the Internet of Things and combating hacking and ransomware attacks. This time around, we're going global, talking about peace and security online, and about how the international system is trying to bring stability and make sense of this new domain of conflict.
Digital transformation is happening at unprecedented speeds: AI, anyone? And policy and regulation need to evolve to keep up with that reality. Meanwhile, there has been widespread use of cyber operations in Russia's invasion of Ukraine, the first large-scale example of hybrid warfare. Well, what are the rules?
Enter the cyber diplomat. An increasing number of nations have them: ambassadors who are assigned not to a country or a region, but instead, assigned to addressing a range of issues online that require international cooperation. Many of these officials are based in Silicon Valley, and the European Union just recently opened a digital diplomacy office in San Francisco.
Meanwhile, the United States named its first Cyber Ambassador, Nathaniel Fick, to the State Department just last year. Here he is at a Council on Foreign Relations event describing the work of his office:
NATHANIEL FICK: Part of the goal here was to bring in not only one person, but a group of people with other perspectives, outside perspectives, in order to build something new inside the department. It's as close to a startup as you're going to get in a large bureaucracy like the Department of State. I think one of our goals, again, is to really restore public-private partnership as a substantive term.
ALI WYNE: What do cyber diplomats do? Why do we need them, and how do they interact with private sector companies around the world?
I'm talking about this subject with Benedikt Wechsler, Switzerland's Ambassador for Digitization, and Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft. Welcome to you both.
BENEDIKT WECHSLER: Pleasure to be here.
KAJA CIGLIC: Thank you for having us.
ALI WYNE: Ambassador, let me begin with you. What does an Ambassador for Digitization do, and why did Switzerland feel that it was necessary as a diplomatic position?
BENEDIKT WECHSLER: Diplomacy is one of the oldest professions in the world, actually. And when our new minister came in four years ago, he wanted to know what the world is going to look like in about eight or 10 years. And out of that came a foreign policy vision, which of course also stated that the digital world, the cyberspace, will be ever more important. An ambassador is sent abroad to promote and protect the interests of its citizens and companies, but also to promote an international order that is conducive to serving the best interests of not only his own country but the whole world.
But we didn't have a structure that deals with the digital world, because there are new partners, new power centers, new actors, yet the same interests and values at stake. So we decided to set up a division for digitalization, whose purpose is exactly to promote the interests of citizens, Swiss companies, values, and human rights in that new field, and to develop that with new partners. So that is our key mission.
ALI WYNE: When most other folks hear the word ambassador, we are thinking about an ambassador to country X. I think it's really exciting that we now have an ambassadorial position for this critical priority. So, run us through some of the most pressing problems that you're tackling in this new role for a very new remit.
BENEDIKT WECHSLER: Normally, we always have a little tendency to fix our ideas on problems and security and risks. And that, of course, is the underlying most important issue: this digital space, cyberspace, has to be a safe space; otherwise, people don't want to engage in it. That's also a change from the physical world to the digital world. I just recently read a report that in Denmark there have been no bank robberies anymore, because there are no more banks where you can get money out. But, of course, the robbers moved to cyberspace.
ALI WYNE: Right. Right.
BENEDIKT WECHSLER: Another little parallel to what we're trying to do: back in the age of the railways, there was a problem that train coaches were moving from one country to another, but there was no means to lock or unlock these coaches. So a group of experts sat together in Bern and devised a special key, still in use today, that was able to lock and unlock these wagons and make train connections safer. So that is exactly what we now have to do in cyberspace: find this key of Bern, or key of Geneva, or keys of wherever, to make the digital cyberspace safe and workable.
ALI WYNE: Kaja, I want to come to you next, building off of the ambassador's remarks. You've said previously that cyber diplomacy is different from other traditional forms of diplomacy because it's multi-stakeholder. What do you mean by multi-stakeholder in this particular context, and why does Microsoft have what it calls a digital diplomacy team?
KAJA CIGLIC: So if you think about the origins of the internet, the internet has always, from the onset, been governed and set up by groups that are not only governments. It includes governments, but also academia, the private sector, and various representatives of civil society. And as the internet grew and became an ever-present part of our lives, its governance structures grew with it.
Of course, governments, states, are there to determine what the regulations are. But because it is global, it's a little bit like what the ambassador was saying about the railways: finding ways for the trains not just to safely lock and unload but to go from one country to another on the same tracks was also something that had to be figured out. I think that is where we are in the online space at the moment.
And currently, the vast majority of it is run and operated by the private sector. And that's why we say it has to be a different conversation that includes all these other stakeholders - hence the word multi-stakeholder - not just governments, which is where more traditional diplomatic conversations have been.
And Microsoft has identified this area as an area of interest and an area of a priority, almost 10 years ago now, when we first started talking about, "Okay, we need clear rules, particularly for safety and security online." Because as all aspects of our life are moving online, we need to make sure that they can continue operating more or less unthreatened.
ALI WYNE: And we talked a lot about this subject last season, and we're going to talk a lot about it again this season. Tell us what the Cybersecurity Tech Accord is and how it fits into this conversation.
KAJA CIGLIC: Yeah, the Cybersecurity Tech Accord is a group of companies that effectively came together in 2018. At the onset, it was just above 30, and now it's just over 150 companies of all sizes from everywhere around the world that all agree that we need to be having this conversation, and we need to be making advances on how to secure peace and stability online, not just now but for future generations.
And so the group came together around four fundamental principles: all the companies are committed to strong defense of their customers; all companies are committed to not conducting offensive operations in cyberspace; all companies are committed to capacity building, so sharing knowledge and understanding; and all companies are committed to working with each other and also with others in the community - not just private sector companies but, like I said earlier, civil society groups, academia, and governments - to try and advance these values and goals.
ALI WYNE: Ambassador, I feel like, every week now, we learn about some state-sponsored hacking or cyber-attacks. You think about air, land, sea - we at least have some clear rules for state behavior based on principles such as recognized borders, sovereign airspace, international waters. Do we have similar international expectations and/or obligations to be respected in cyberspace?
BENEDIKT WECHSLER: Sometimes, I think, we have to be aware that although we are so familiar with the cyber and digital world, it's still a new technology. When you look at air or sea, this has taken decades. And there, too, the governance and all the rules and norms evolved over time, over decades and years. And I think we don't have that much time to develop these organizations and rules as we had for the maritime world or for the airspace. But as Kaja said, it's a new, specific, and especially multi-stakeholder world and space, and we have to take that into account.
So we cannot just set up a new organization like we did in the old days, and then think, "Well, we'll sit together among states, we will negotiate something, then we'll have a good glass of wine, and we hope that everybody is going to abide by these rules." No, I think we have to have a whole toolbox, or maybe a Swiss army knife, with all sorts of tools adapted to the differences today. So for instance, we negotiated a classical cybercrime convention in Vienna. There's the process of the Open-Ended Working Group at the UN on responsible behavior in cyberspace, where all these norms are being developed and where we also have civil society and companies, the private sector, involved.
Then we have dialogues, for instance, the Sino-European Cyber Dialogue with China and the European states. We have it also with the United States. And there we are sort of defining, "Okay, international humanitarian law, what is protection of basic infrastructure?" So we're getting there. And I think also very importantly is that we engage very closely with the private sector because there's the knowledge, there's the innovation, so that we can really develop smart rules.
And then, lastly, I think we have to embrace much more the scientific world as well, because in science there's so much progress and innovation and foresight that we have to take into account, because this is going to become a reality much faster. We cannot see where AI is going without also involving the scientific world.
ALI WYNE: This is the second season of Patching the System, and it's amazing. When I look back on the episodes that we recorded last year, it's extraordinary how much science has progressed, how much technology has progressed. And I suspect that the rate of that scientific and technological innovation, it's only going to grow with each year, but just an observation to say how rapidly these scientific and technological domains are…
BENEDIKT WECHSLER: You're right, and I think everybody was surprised. It's amazing.
ALI WYNE: I posed this question to Kaja earlier: we think about cyber diplomacy as being different from what we might call more “traditional” forms of diplomacy. But in traditional diplomacy, ambassador, we often think of governments in aligned groups. So we think, for example, of the NATO alliance for security, or we think of countries that support free markets and free speech versus those that support more state control. Are there similar alignments when it comes to matters of cyber diplomacy?
BENEDIKT WECHSLER: Yes, definitely. I mean, that is no secret. I think you have like-minded countries also in the tech and the digital space because we want to see technology being an enabler for better reaching sustainable development goals, expanding freedom, strengthening human rights, and not undermining human rights. And of course, there are countries in the world who have a contrary view and position.
On the other hand, I think it's interesting to see in mankind and international relations, there have always been antagonistic situations. But still, there was always some agreement consensus on certain things that, as humanity, we have to stick together. And even, I mean, in the coldest times of the Cold War, there was collaboration in space between the U.S. and the Soviet Union.
And I think we also feel a little bit that, with this digital world, the internet, nobody has really cut itself off from this world, because they know it's just too important. And we have to build on this common heritage and common base, even as other countries pursue other ways of using this technology. There are things in warfare that we decided we shouldn't do, and we should stick to this, and maybe develop it where needed, but especially keep those commitments in the online cyber world as well.
ALI WYNE: Kaja, I want to bring you back into the conversation. From Microsoft's point of view, what is your overall sense of the trend line when it comes to nation-state activity online? Are we moving more towards order or more towards chaos? And what can industry, including companies such as Microsoft, do to support the kind of diplomacy the ambassador has described - to advance what you might call a rules-based international order online?
KAJA CIGLIC: I would probably say it's a bit of both. This situation is both terrifying and sort of deteriorating, I would say, at the same time. Part of the reason is because the technology's evolving so fast.
ALI WYNE: Right.
KAJA CIGLIC: And so, as a result, it means that every time there is a new tool put on the market, so to say, it can, and someone will try and test it as a weapon. So I think that's the reality of human nature. And we are seeing that the deteriorating situation is also reflecting what's going on in the offline world, right. I think, at the moment, in terms of geopolitics, we're not in the best place that we have ever been, and that's reflected online. At the same time, I would say not everything is super bleak. As the ambassador was saying, we do have rules. We have international law. We have human rights commitments. We have International Humanitarian Law, and while we do see these being breached, they're not being breached by the vast majority of countries.
They're being breached by a very small minority of countries. And I think, increasingly, we're seeing states that believe in the values of international law, that believe in these commitments that have been made in other domains over the past hundreds of years as important to reinforce and support in the online world. And as a result, they’re calling out bad behavior, they're calling out breaches of international law, and that's a very positive development.
But that doesn't mean that we should be complacent, right? I think, increasingly, states are seeing cyber as a domain of conflict. Increasingly, we're seeing the private sector developing tools and weapons that are being used for offensive purposes.
Cyber mercenaries are effectively a new market that has emerged over the past five or so years and is booming because there is such an appetite for those type of technologies by governments. And I think to your last question in terms of how the industry can help and support - some of it is just we can share what we see. The big companies, in particular, we are often the vector through which the other governments or the targets get attacked. They use Microsoft systems. They operate on a cloud platform. So we see both the actors, we see the trends, and we see the emerging new techniques. And I think that's important for sort of the foreign policy experts around the world to be aware of, understand, and be able to act upon.
ALI WYNE: Ambassador, I want to come back to you. How do you think that the tech sector, in particular, should engage with the work of cyber diplomats such as yourself?
BENEDIKT WECHSLER: I think one important aspect is that we are dealing here with an infrastructure issue. It's not just a tool or a product. It's an infrastructure, and a vital infrastructure. And I think that also implies how the tech companies should and could be part of that. So I think they should build this infrastructure together. And we, at the same time, can learn a lot from the tech companies. For instance, before I took up this position, I had never heard of the expression red teaming. But that's a whole way of working and making products safe, like how you check a car before you put it on the market.
And I think if we work together and adapt these red teaming processes so that we also involve human rights aspects and other safety aspects, then the products that actually come to market will already be developed in such a state that they cannot be used for malign purposes. And I think we also have to think of new forms of governance, where the tech sector is really a responsible, constituent part of governance - not just looking at an issue from a lobbying perspective, or asking how to influence regulation in this or that direction, but really building the whole house together.
ALI WYNE: Kaja, we often talk about cyber operations in peacetime, but it's an entirely different matter when cyber operations are used in armed conflict, and Microsoft has been doing a lot of work reporting on Russia's use of cyber attacks in Ukraine. What has it looked like to integrate cyber operations into war - really, for the first time - and what has the impact been?
KAJA CIGLIC: I think, definitely, for the first time at this scale, we're talking about the use of cyber in a conflict, and the impact has, in effect, been tremendous. Even just before the war began, the Russians had effectively either prepositioned for espionage purposes or begun destructive operations in Ukraine that supported their military goals. Over the past year and a half, we've seen a level of coordination between attacks conducted online - cyber attacks, including on civilian infrastructure, not just as part of military operations - and attacks then conducted by traditional military means - so, effectively, bombs.
And so we've definitely seen similar targets attacked in a similar time period in a specific part of the country. The alignment between cyber and kinetic operations in war has been, to that extent, something I think we've never seen - not just Microsoft, but in general. The other thing to consider is that the Russians have frequently used foreign influence operations - so, disinformation - as part of their war effort, oftentimes in connection with cyber operations as well.
This is a tool, a technique, that the Russians have used in the past as well - if you look at their traditional foreign influence operations, just not online, in the '70s and '80s - and that has transposed over to the hack-and-leak world. They've used it both to weaken the Ukrainian resolve and to undermine Ukraine's support abroad, particularly in Europe but elsewhere as well.
And the only reason I would say neither of those have been nearly as successful as perhaps has been expected is the unexpected but wonderful ability for both Western governments and the private sector sort of across the board, irrespective of companies being competitors or anything like that, to come to Ukrainians’ defense.
We think that a lot of the attacks have been blunted also because the Ukrainian government very quickly, at the beginning of the war, decided to migrate a lot of the government data to the cloud, again, with Microsoft but also with competitors, and were thus able to effectively protect and continue operating the government normally, but from abroad.
ALI WYNE: So cyber offenses and cyber defenses, it seems, are increasing in parallel - a kind of tit-for-tat game. Ambassador, considering this example of hybrid warfare in Ukraine, what are the lessons for the diplomatic community moving forward? And assuming that future armed conflicts will also have similar cyber elements, how should the international system prepare?
BENEDIKT WECHSLER: I mean, there's a component of probably the classic disarmament processes. Normally, I think you can expect every sort of nation or state would like to have a position of superiority just to feel safe and to be smarter and more capable than the others so that an attacker wouldn't dare to attack them.
But we arrived in the nuclear arms race at a stage where we had to say this is MAD - Mutually Assured Destruction. And although one side may still have an edge, I think in the cyber world we can almost, if they really want, kill ourselves mutually. So that understanding leads you to a point where states will probably also accept, "Okay, let's not kill each other totally."
ALI WYNE: That's a good starting point.
BENEDIKT WECHSLER: Yeah.
KAJA CIGLIC: That would be good, yeah.
BENEDIKT WECHSLER: So we'll have to ban some things. Maybe for some things we just have to sit together and say, "Well, we should outright ban this." And then we have other things that we cannot ban, but where we have to reduce the negative impact on civilians, on critical infrastructure, on vulnerable persons, and so forth. And so then we come to the story of the International Committee of the Red Cross and the Geneva Conventions: if we can't ban or eliminate war, let's at least see that we can make it as least impactful as possible for everybody. And of course, now we are coming into totally new terrain with AI as well - autonomous lethal weapons systems, drones.
And also, I mean, think of the satellites issue - a company like Starlink can more or less decide, "Well, now we can't give you coverage anymore," and then basically your operation will stop because you don't have the infrastructure anymore to launch an attack. But I think it's something we have to tackle in the logic of disarmament - banning or mitigating or limiting effects - but also on very specific items. We had the Landmines Convention. We had certain munitions that we wanted banned. So I think it's going to be the very hard, thorny work of diplomats to try to limit this to the maximum possible extent.
ALI WYNE: Even beyond the weaponization of cyberspace, technology itself is constantly evolving. Just this year alone, we've seen a real explosion in generative AI and, as a result, a rush from both governments and the private sector to find some kind of regulatory framework. How do you view this new factor in terms of cyber diplomacy? How does it affect the work that you're doing on a daily basis?
BENEDIKT WECHSLER: Well, I heard somebody say, "AI is the new digital." And, of course, we are also trying to see how we can develop tools based on AI to make diplomacy more efficient, and also to make it a tool for building more consensus on issues, because you can probably gather more information, more data, to show, "Well, we have a common interest here." And we launched a project we call the Swiss Call on Trust and Transparency in AI, where we are not looking into new regulations but rather at what kinds of formats of collaboration, what kinds of platforms, we need to build up in order to get more trust and transparency.
And that builds a lot on what has been done in the area of cybersecurity, on actions against ransomware. And again, as Kaja said, it's about companies and diplomats, or the governments, working together and sharing expertise, because it's not a question of competition with the private sector but, again, of building an infrastructure - a building that has to be solid - and then within that infrastructure we can compete again.
ALI WYNE: Kaja, I want to bring you back into the conversation, and let's just zero in particular on the implications of artificial intelligence for security. How does Microsoft think that AI will play into concerns around escalating cyber conflict? And will AI, I mean, effectively just pour gasoline on the fire?
KAJA CIGLIC: I really don't think so. I think we actually have great opportunity to gain a little bit of an asymmetric advantage as defenders in this space. The reason is, while obviously malicious actors will abuse AI and probably are abusing AI today, we are using AI already, and we'll continue to use it to defend.
In Ukraine, we're using AI to help our defenders on cybersecurity. Microsoft gets 65 trillion - I think it's some absurd number - signals daily on Microsoft platforms about what's going on online. Obviously, humans can't go through all of that to identify anomalies and neutralize threats. Technology can, and technology is, right? So the AIs that understand what's wrong - and this has happened already, but it will improve even further - are looking at, "Oh, okay, this malware attack looks similar to something that has happened before, so I will preemptively stop it independently," right? I think that will actually help us in terms of cybersecurity.
ALI WYNE: Tell us one concrete step that you would like to see taken to get us somewhat closer to a sustainable diplomatic framework for cyber. Kaja, let's start with you, and then we'll bring the ambassador to close this out.
KAJA CIGLIC: I think it'd be really important for the UN to recognize this as a real issue. There are a bunch of working groups. We've seen the Secretary-General make statements about it and call on states to act. But a permanent body within the United Nations that would discuss some of these issues would be very welcome. At the moment, there are a lot of working groups, or groups of governmental experts, that kind of get renewed every five or so years, and there's not a dedicated effort necessarily focused on some of these issues.
And then, of course, we would love to find a way for the industry, and the multi-stakeholder community writ large, to be able to participate and share their insights and knowledge in this area. Like I was saying at the beginning, there are opportunities. I would say Microsoft and many other private sector groups get blocked a fair amount by certain states. And I get that it's a political decision at some level, but it's something that we'd really, really like to see institutionalized - both the process and the multi-stakeholder inclusion.
BENEDIKT WECHSLER: I see sort of a historic window of opportunity opening up with the work on the Global Digital Compact. We have the Summit of the Future next year, the Common Agenda. So, a little bit like with the SDGs, we as the world community are coming together and saying, "Okay, this is really too important. We are all in this together." Maybe movies like Oppenheimer are also reminding us of some things from the past.
And I'd like to close with Albert Einstein, who said in the 1930s that technological advances could have made human life carefree and happy if the development of the organizing power of man - back then; and woman, I would say today - had been able to keep step with its technical advances. Instead, the hard-bought achievements of the machine age in the hands of our generation are as dangerous as a razor in the hands of a three-year-old child. So I hope that we see this urgency, but also this huge opportunity that we had already once as humanity, and that this time we fully grasp it.
ALI WYNE: Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft, and Ambassador Benedikt Wechsler, Switzerland's Ambassador for Digitization. Thank you both so much for taking the time to speak with me and to enlighten our audience. It's been a real pleasure.
KAJA CIGLIC: Thank you. This was a great conversation.
BENEDIKT WECHSLER: Thank you. It was a privilege to be with you.
ALI WYNE: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcast to hear the rest of this new season. I'm Ali Wyne. Thanks very much for listening.
Hackers, innovation, malice & cybercrime
In the 1950s, "phreakers" whistled their way into free long-distance calls. Steve Wozniak later improved on the scam, making enough cash to help get Apple started along with Steve Jobs.
Many of today's hackers are also bored kids trying to beat the system and make a quick buck in the process. But they can also do more sinister things, Ian Bremmer tells GZERO World.
The annual global cost of cybercrime has almost tripled since 2005. If it were an economy, cybercrime would be the world's third-largest after the US and China.
We saw the impact with the 2021 ransomware attack on the Colonial Pipeline, enabled by a single compromised password. Indeed, hackers only need a tiny opening to bring down a company or a country. And they know that in Beijing, Moscow, Pyongyang, and Tehran.
So, what can we do about it?
Watch the GZERO World episode: Hackers, Russia, China: cyber battles & how we win
How private businesses help fight cybercrime
The federal government wants to help US businesses better defend themselves against cyberattacks — but little can be done if corporations don't report them.
That's why the Biden administration is championing a new law that forces them to do so, says Jen Easterly, head of the Cybersecurity and Infrastructure Security Agency.
The Cyber Incident Reporting for Critical Infrastructure Act requires whoever operates critical infrastructure to report attacks coming from state and non-state actors.
And that data will "drive down risk in a much more systematic way," Easterly tells Ian Bremmer on GZERO World.
Watch the GZERO World episode: Hackers, Russia, China: cyber battles & how we win