Podcast: Foreign influence, cyberspace, and geopolitics
Listen: Thanks to advancing technology like artificial intelligence and deepfakes, governments can increasingly use the online world to spread misinformation and influence foreign citizens and governments, as well as citizens at home. At the same time, governments and private companies are working hard to detect these campaigns and protect against them while upholding ideals like free speech and privacy.
In season 2, episode 3 of Patching the System, we're focusing on the international system of bringing peace and security online. In this episode, we look at the world of foreign influence operations and how policymakers are adapting.
Our participants are:
- Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats
- Clint Watts, General Manager of the Microsoft Threat Analysis Center
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Foreign Influence, Cyberspace, and Geopolitics
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Teija Tiilikainen: What the malign actors are striving at is that they would like to see us starting to compromise our values, so question our own values. We should make sure that we are not going to the direction where the malign actors would want to steer us.
Clint Watts: From a technical perspective, influence operations are detected by people. Our work of detecting malign influence operations is really about a human problem powered by technology.
Ali Wyne: When people first heard this clip that circulated on social media, more than a few were confused, even shocked.
AI VIDEO: People might be surprised to hear me say this, but I actually like Ron DeSantis a lot. Yeah, I know. I'd say he's just the kind of guy this country needs, and I really mean that.
Ali Wyne: That was not Hillary Clinton. It was an AI-generated deepfake video, but it sounded so realistic that Reuters actually investigated it to prove that it was bogus. Could governments use techniques such as this one to spread false narratives in adversary nations or to influence the outcomes of elections? The answer is yes, and they already do. In fact, in a growing digital ecosystem, there are a wide range of ways in which governments can manipulate the information environment to push particular narratives.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. Today, we're looking at the growing threat of foreign influence operations, state-led efforts to misinform or distort information online.
Joining me now are Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats, and Clint Watts, General Manager of the Microsoft Threat Analysis Center. Teija, Clint, welcome to you both.
Teija Tiilikainen: Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: Before we dive into the substance of our conversation, I want to give folks an overview of how both of your organizations fit into this broader landscape. So Clint, let me turn to you first. Could you quickly explain what it is that Microsoft's Threat Analysis Center does, what its purpose is, what your role is there, and how does its approach now differ from what Microsoft has done in the past to highlight threat actors?
Clint Watts: Our mission is to detect, assess and disrupt malign influence operations that affect Microsoft, its customers and democracies. And it's quite a bit different from really how Microsoft has handled it up until we joined. We were originally a group called Miburo and we had worked in disrupting malign influence operations until we were acquired by Microsoft about 15 months ago.
And the idea behind it is we can connect what's happening in the information environment with what's happening in cyberspace and really start to help improve the information integrity and ecosystem when it comes to authoritarian nations that are trying to do malign influence attacks. Every day, we're tracking Russia, Iran, and China worldwide in 13 languages in terms of the influence operations they do. That's a combination of websites and social media. Hack-and-leak operations are a particular specialty of ours, where we work with the Microsoft Threat Intelligence Center. They see the cyberattacks and we see the alleged leaks or influence campaigns on social media, and we can put those together to do attribution about what different authoritarian countries are doing or trying to do to democracies worldwide.
Ali Wyne: Teija, I want to pose the same question to you because today might be the first that some of our listeners are hearing of your organization. What is the role of the European Center of Excellence for Countering Hybrid Threats?
Teija Tiilikainen: So this center of excellence is an intergovernmental body that was established six years ago by nine governments from various EU and NATO countries, but today it covers 35 governments. So we cover 35 countries, all of them EU members and NATO allies, and we also cooperate closely with the European Union and NATO. Our task is more strategic, so we try to analyze the broad hybrid threat activity. With hybrid threats, we are referring to unconventional threat forms: election interference, attacks against critical infrastructures, manipulation of the information space, cyber, and all of that. We create capacity, we create knowledge and information about these things, and we try to provide recommendations and share best practices among our governments about how to counter these threats and how to protect our societies.
Ali Wyne: We're going to be talking about foreign influence operations throughout our conversations, but just first, let's discuss hybrid conflicts such as what we're seeing and what we have seen so far in Ukraine. And I'm wondering how digital tactics and conflicts have evolved. Bring us up to speed as to where we are now when it comes to these digital tactics and conflicts and how quickly we've gotten there.
Teija Tiilikainen: So it's easier to start with our environment, where societies rely more and more on digital solutions; information technology is a part of our societies. We built that for the benefit of our democracies and their economies. But in this deep conflict where we are - a conflict that has many dimensions, one of them being the one between democracies and authoritarian states - that changes the role of our digital solutions, and all of a sudden we see how they have started to be exploited against our security and stability. So it is about our reliance on critical infrastructures, where we have digital solutions, the whole information space, the cyber systems.
So this is really a new and strengthening dimension in conflicts worldwide, where we talk more and more about the need to protect our vulnerabilities, and those vulnerabilities more and more lie in the digital space, in the digital solutions. So this is a very technological kind of instrument, more and more requiring advanced solutions from our societies, from us, and a very different understanding of threats and risks. If we compare that with the more traditional picture, where armed attacks and military operations used to form threat number one, now it is about the resilience of our critical infrastructures, about the security and safety of our information solutions. So the world looks very different, and ongoing conflicts do as well.
Ali Wyne: And that word, Teija, that you use, resilience, I suspect that we're going to be revisiting that word and that idea quite a bit throughout our conversation. Clint, let me turn to you now and ask how foreign influence operations in your experience, how have they evolved online? How are they conducted differently today? Are there generic approaches or do different countries execute these operations differently?
Clint Watts: So to set the scene for where this started, I think the point to look at is the Arab Spring. When it came to influence operations, the Arab Spring, Anonymous, Occupy Wall Street, lots of different political movements, they all occurred roughly at the same time. We often forget that. That's because social media allowed people to come together around ideas, organize, mobilize, participate in different activities. That was very significant, I think, for nation states, but one in particular, which was Russia, which was a little bit thrown off by, let's say, the Arab Spring and what happened in Egypt, for example - but at the same point intrigued by what the ability to go into a democracy, infiltrate it in the online environment, and then start to pit people against each other might offer.
That was the entire idea of their Cold War strategy known as active measures, which was to go into any nation, particularly in the United States or any set of alliances, infiltrate those audiences and win through the force of politics rather than the politics of force. You can't win on the battlefield, but you can win in their political systems and that was very, very difficult to do in the analog era. Fast-forward to the social media age, which we saw the Russians do, was take that same approach with overt media, fringe media or semi-covert websites that look like they came from the country that was being targeted. And then combine that with covert troll accounts on social media that look like and talk like the target audience and then add the layer that they could do that no one else had really put together, which was cyberattacks, stealing people's information and timing the leaks of information to drive people's perceptions.
That is what really started about 10 years ago, and our team picked up on it very early in January 2014 around the conflict in Syria. They had already been doing it in the conflict in Ukraine 10 years ago, and then we watched it move towards the elections: Brexit first, the U.S. election, then the French and German elections. This is that 2015, '16, '17 period.
What's evolved since is everyone recognizing the power of information and how to use it and authoritarians looking to grab onto that power. So Iran has been quite prolific in it. They have some limitations, they have resource issues, but they still do some pretty complex information attacks on audiences. And now China is the real game changer. We just released our first East Asia report where we diagrammed what we saw as an incredible scaling of operations and centralization of it.
So looking at how to defend, I think it's remarkable that, like Teija mentioned, in Europe a lot of countries that are actually quite small - Lithuania, for example - have been able to mobilize their citizens in a very organized way to help as part of the state's defense, come together with a strategy, and network people to spot disinformation and refute it if it's coming from Russia.
In other parts of the world though, it's been much, much tougher, particularly even the United States where we've seen the Russians and other countries now infiltrate into audiences and you're trying to figure out how to build a coherent system around defending when you have a plurality of views and identities and different politics. It's actually somewhat more difficult, I think, the larger a country is to defend, particularly with democracies.
And then the other thing is you've seen the resilience and rebirth of alliances and not just on battlefields like NATO, but you see NATO, the EU, the Hybrid CoE, you see these groups organizing together to come through with definitions around terminology, what is a harm to democracy, and then how best to combat it. So it's been an interesting transition I would say over the last six years and you’re starting to see a lot of organization in terms of how democracies are going to defend themselves moving forward.
Ali Wyne: Teija, I want to come back to you. What impact have foreign influence operations had in the context of the war in Ukraine, both inside Ukraine but also globally?
Teija Tiilikainen: I think this is a full-scale war also in the sense that it is very much a war about narratives. So it is about whose story, whose narrative is winning this war. I must say that Russia has been quite successful with its own narrative if we think about how supportive the Russian domestic population still is with respect to the war and the role of the regime, of course there are other instruments also used.
Before the war started, Russia started to promote a very false story about what was going on in Ukraine. There was the argument about an ongoing Nazification of Ukraine. There was another argument about a genocide of the Russian minorities in Ukraine that was supposedly taking place. And there was also a narrative about how Ukraine had become a tool in the toolbox of the Western alliance - that is, NATO - or for the U.S. to exert its influence, and how it was also used offensively against Russia. And these were of course all parts of the Russian information campaign - disinformation - with which it justified - legitimized - its war.
If we take a look at the information space in Europe, or more broadly in Africa, for instance, today we see that the Western narrative about the real causes of the war - how Russia violated international law and the integrity and sovereignty of Ukraine - this kind of real, fact-based narrative is not doing that well. This proves the strength of foreign influence operations when they are strategic, well-planned, and of course when they are used by actors such as Russia and China that tend to cooperate. Also outside the war zone, China is using this Russian narrative to put the blame for the war on the West and present itself as a reliable international actor.
So there are many elements in this war, not only the military activity, but I would in particular want to emphasize the role of these information operations.
Ali Wyne: It's sobering not only thinking about the impact of these disinformation operations, these foreign influence operations, but also, Teija, you mentioned the ways in which disinformation actors are learning from one another and I imagine that that trend is going to grow even more pronounced in the years and decades ahead. So thank you for that answer. Clint, from a technical perspective, what goes into recognizing information operations? What goes into investigating information operations and ultimately identifying who's responsible?
Clint Watts: I think one of the ironies of our work is that, from a technical perspective, influence operations are detected by people. One of the big differences, especially as we work with MSTIC - the cyber team - and our team, is that our work of detecting malign influence operations is really about a human problem powered by technology. If you want to be able to understand and get your lead, we work more like a newspaper in many ways. We have a beat that we're covering - let's say it's Russian influence operations in Africa. And we have real humans, people with master's degrees who speak the language. I think the team speaks 13 languages in total amongst the 28 of us. They sit and they watch and they get enmeshed in those communities and watch the discussions that are going on.
But ultimately we're using some technical skills, some data science, to pick up those trends and patterns, because the one thing that's true of influence operations across the board is you cannot influence and hide forever. Ultimately your position or, as Teija said, your narratives will track back to the authoritarian country - Russia, Iran, China - and what they're trying to achieve. And there are always tells, common tells of context: words, phrases, sentences used out of context. And then you can also look at the technical perspective. Nine years ago when we came onto the Russians, the number one technical indicator of Russian accounts was Moscow time versus U.S. time.
Ali Wyne: Interesting.
Clint Watts: They worked in shifts. They were posing as Americans, but talking at 2:00 AM, mostly about Syria. And so it stuck out, right? That was a contextual thing. You move, though, from those human tips and insights - almost like a beat reporter - to using technical tools. That's where we dive in. So that's everything from understanding associations of time zones, how different batches of accounts on social media might work in synchronization together, how they'll change topics time and time again. The Russians are a classic example, wanting to talk about Venezuela one day, Cuba the next, Syria the third day, the U.S. election the fourth, right? They move in sequence.
And so I think when we're watching people and training them, when they first come on board, it's always interesting. We try and pair them up in teams of three or four with a mix of skills. We have a very interdisciplinary set of teams. One will be very good in terms of understanding cybersecurity and technical aspects. Another one, a data scientist. All of them can speak a language and ultimately one is a former journalist or an international relations student that really understands the region and it's that team environment working together in person that really allows us to do that detection but then use more technical tools to do the attribution.
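[Illustrative aside: the timing indicator Clint describes - personas claiming a U.S. location but posting on a Moscow work schedule - can be sketched as a small data-science check. Everything below is hypothetical: the account, timestamps, and thresholds are invented for illustration and do not represent Microsoft's actual tooling.]

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical account record: a persona claiming a U.S. location, with UTC post timestamps.
@dataclass
class Account:
    handle: str
    claimed_tz_offset: int       # hours from UTC the persona claims (e.g., -5 for U.S. Eastern)
    post_times_utc: list

MOSCOW_OFFSET = 3  # UTC+3

def active_hours(acc, tz_offset):
    """Histogram of posting hours shifted into a given timezone."""
    return Counter((t.hour + tz_offset) % 24 for t in acc.post_times_utc)

def looks_shift_like(acc, offset=MOSCOW_OFFSET, workday=range(9, 18), threshold=0.8):
    """Flag an account whose activity concentrates in another timezone's working hours."""
    hours = active_hours(acc, offset)
    total = sum(hours.values())
    in_shift = sum(hours[h] for h in workday)
    return total > 0 and in_shift / total >= threshold

# Toy example: a "U.S." persona posting almost entirely during Moscow office hours.
posts = [datetime(2023, 5, 1, h, tzinfo=timezone.utc) for h in (6, 7, 8, 9, 10, 11, 12, 13)]
suspect = Account("patriot_eagle_1776", claimed_tz_offset=-5, post_times_utc=posts)
print(looks_shift_like(suspect))                                  # True
print(sorted(active_hours(suspect, suspect.claimed_tz_offset)))   # posts land at 1-8 AM "local" time
```

In practice, as Clint notes, a signal like this is only a lead; human analysts with language and regional expertise still make the call.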
Ali Wyne: So you talked about identifying and attributing foreign influence operations, and that's one matter, but how do you actually combat them? How do you combat those operations, and what role, if any, can the technology industry play in combating them?
Clint Watts: So we're at a key spot, I think, at Microsoft in protecting the information environment, because we do understand the technical signatures much better than any one government probably could or should. There are lots of privacy considerations - we take them very seriously at Microsoft - about maintaining customer privacy. At the same point, the role that tech can play is illustrated in some of our recent investigations, one of them where we found more than 30 websites which were being run out of the same three IP addresses. They were all sharing content pushed from Beijing, but tailored to local environments, and local communities don't have any idea that those are actually Chinese state-sponsored websites.
So what we can do, being part of the tech industry, is confirm from a technical perspective that all of these things and all of this activity are linked together. I think that's particularly powerful. Also, in terms of the cyber and influence convergence, as we would say, we can see a leak operation where an elected official in one country is targeted as part of a foreign cyberattack for a hack-and-leak operation. We can see where the hack occurred. If we have good attribution on it - for Russia and Iran in particular, we have very strong attribution and publish on it frequently - we can then match that up with the leaks that we see coming out and where they come out from. And usually the first person to leak the information is in bed with the hacker that got the information. So that's another role that tech can play: awareness of who the actors are, what the connections are between one influence operation and one cyberattack, and how that can change people's perspectives, let's say, going into an election.
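[Illustrative aside: the infrastructure overlap Clint mentions - dozens of ostensibly independent sites served from the same handful of IP addresses - amounts to a simple grouping exercise. The domains and addresses below are invented placeholders, and this is a hypothetical sketch rather than Microsoft's method.]

```python
from collections import defaultdict

# Hypothetical, pre-resolved mapping of "independent" news domains to hosting IPs.
# In practice the resolution would come from DNS records or passive-DNS history.
resolved = {
    "dailymetrovoice.example": "203.0.113.10",
    "heartlandtribune.example": "203.0.113.10",
    "coastalobserver.example": "203.0.113.11",
    "rivercityledger.example": "203.0.113.10",
    "unrelated-blog.example": "198.51.100.7",
}

def cluster_by_ip(mapping):
    """Group domains that share hosting infrastructure."""
    clusters = defaultdict(list)
    for domain, ip in mapping.items():
        clusters[ip].append(domain)
    return clusters

# Flag any single IP serving several ostensibly independent outlets.
for ip, domains in cluster_by_ip(resolved).items():
    if len(domains) >= 3:
        print(f"{ip} hosts {len(domains)} 'independent' sites: {sorted(domains)}")
```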
Ali Wyne: Teija, I want to come back to you to ask about a dilemma that I suspect you and your colleagues and everyone operating in this space - indeed, many people - are grappling with. And I want to put the question to you. I think that one of the central tensions in combating disinformation, of course, is preserving free speech in the nations where it exists. How should democracies approach that balancing act?
Teija Tiilikainen: This is a very good question, and I think what we should keep in mind is that what the malign actors are striving at is that they would like to see us starting to compromise our values, to question our own values. Open society, freedom of speech, rule of law, democratic practices, and the principle of democracy - we should stick to our values, and we should make sure that we are not going in the direction where the malign actors would want to steer us. But it is exactly as you formulate the question: how do we make sure that these values are not exploited against our broad societal security, as is happening right now?
So of course there is not one single solution. Technological solutions certainly can help us protect our society, as can broad awareness in society about these types of threats. Media literacy is the keyword mentioned many times in this context. A totally new approach to the information space is needed, and it can be achieved through education and study programs, but also by supporting quality media and the kind of media that relies on journalistic ethics. So we must make sure that our information environment is solid and that also in the future we'll have the possibility to make a distinction between disinformation and facts, because that distinction is getting very blurred in a situation where there is a competition about narratives going on. Information has become a tool in many different conflicts that we have in the international space, but also many times at the domestic level.
I would like to offer the center's model because it's not only that we need cooperation between private actors, companies, civil society actors and governmental actors in states. We also need firm cooperation among like-minded states, sharing of best practices, learning. We can also learn from each other. If the malign actors do that, we should also take that model into use when it comes to questions such as how to counter, how to build resilience, what are the solutions we have created in our different societies? And this is why our center of excellence has been established exactly to provide a platform for that sharing of best practices and learning from each other.
So it is a very complicated environment in terms of our security and our resilience, and we need a broad package of tools to protect ourselves. But I still want to stress our values and the very fact that this is what the malign actors would like to challenge and want us to challenge as well. So, let's stick to them.
Ali Wyne: Clint, let me come back to you. So we are heading into an electorally very consequential year. And perhaps I'm understating it, 2024 is going to be a huge election year, not only for the United States but also for many other countries where folks will be going to polls for the first time in the age of generative artificial intelligence. Does that fact concern you and how is artificial intelligence changing this game overall?
Clint Watts: Yeah, so I think it's too early for me to say as strange as it is. I remind our team, I didn't know what ChatGPT was a year ago, so I don't know that we know what AI will even be able to do a year from now.
Ali Wyne: Fair point, fair point.
Clint Watts: In the last two weeks, I've seen or experimented with so many different AI tools that I just don't know the impact yet. I need to think it through and watch a little bit more in terms of where things are going with it. But there are a few notes that I would say about elections and deepfakes or generative AI.
Since the invasion of Ukraine, we have seen very sophisticated fakes of both Zelensky and Putin and they haven't worked. Crowds, when they see those videos, they're pretty smart collectively about saying, "Oh, I've seen that background before. I've seen that face before. I know that person isn't where they're being staged at right now." So I think that is the importance of setting.
Public versus private I think is where we'll see harms in terms of AI. When people are alone and AI is used against them, let's say, a deepfake audio for a wire transfer, we're already seeing the damages of that, that's quite concerning. So I think from an election standpoint, you can start to look for it and what are some natural worries? Robocalls to me would be more worrisome really than a deepfake video that we tend to think about.
The other things about AI that I don't think get enough examination, at least from a media perspective: everyone thinks they'll see a politician say something - your opening clip is an example of it - and it will fool audiences in a very dramatic way. But the power of AI in terms of utility for influence operations is mostly about understanding audiences, or being able to connect with an audience with a message and a messenger that is appropriate for that audience. And by that I mean creating messages that make more sense.
Part of the challenge for Russia and China in particular is always context. How do you look like an American or a European in their country? Well, you have to be able to speak the language well. That's one thing AI can help you with. Two, you have to look like the target audience to some degree. So you could make messengers now. But I think the bigger part is understanding the context and timing and making it seem appropriate. And those are all things where I think AI can be an advantage.
I would also note that here at Microsoft, my philosophy with the team is machines are good at detecting machines and people are good at detecting people. And so there are a lot of AI tools we're already using in cybersecurity, for example, with our copilots where we're using AI to detect AI and it's moving very quickly. As much as there's escalation on the AI side, there's also escalation on the defensive side. I'm just not sure that we've even seen all the tools that will be used one year from now.
Ali Wyne: Teija, let me just ask you about artificial intelligence more broadly. Do you think that it can be both a tool for combating disinformation and a weapon for promulgating disinformation? How do you view artificial intelligence broadly when it comes to the disinformation challenge?
Teija Tiilikainen: I see a lot of risks. I do also see possibilities, and artificial intelligence certainly can be used as a resilience tool. But the question is more about who is faster - whether the malign actors take full advantage of AI before we find the loopholes and possible vulnerabilities. I think it's very much a hardcore question for our democracies. The day when an external actor can interfere efficiently in our democratic processes, in elections, in election campaigns - the very day when we can no longer be sure that what is happening in that framework is domestically driven - that day will be very dangerous for the whole democratic model, the whole functioning of our Western democracies.
And we are approaching that day, and AI is, as Clint explained, one possible tool for malign actors who want to discredit not only the model, but also interfere in the democratic processes and affect the outcomes of elections, the topics of elections. Deepfakes and all the solutions that use AI are so much more efficient, so much faster, and they are able to use so much data.
So I see unlimited possibilities unfortunately for the use of AI for malign purposes. So this is what we should focus on today when we focus on resilience and the resilience of our digital systems.
And this is also a highly unregulated field, also at the international level. If we think about weapons, if we think about military force - well, now we are in a situation of deep conflict, but before we were there, we used to have agreements and treaties and conventions between states that regulated the use of weapons. Those agreements are no longer in very good shape. But what do we have in the realm of cyber? At the international level this is a highly unregulated field. So there are many problems, and I can only encourage and stress the need to identify the risks that come with these solutions. And of course we need to have regulation of AI solutions and systems at the state level, as well as, hopefully at some point, international agreements concerning the use of AI.
Ali Wyne: I want to close by emphasizing that human component and ask you, as we look ahead and as we think about ways in which governments, private sector actors, individuals, and others in this ecosystem can be more effective at combating disinformation and foreign influence operations, what kinds of societal changes need to happen to neutralize the impact of these operations? So talk to us a little bit more about the human element of this challenge and what kinds of changes need to happen at the societal level.
Teija Tiilikainen: I would say that we need a cultural change. We need to understand societal security very differently. We need to understand the risks and threats against societal security in a different way. And this is about education. This is about schools, this is about study programs at universities. This is about openness in media, about risks and threats.
But this is also needed in those countries that do not have the tradition. In the Nordic countries - here in Finland, in Scandinavia - we have a firm tradition of public-private cooperation when it comes to security policy. We are small nations and the geopolitical region has been unstable for a long time. So there is a need for public and private actors to share the same understanding of security threats and also to cooperate to find common solutions. And I can only stress the importance of public-private cooperation in this environment.
We need more systematic forms of resilience. We have to ask ourselves: what does resilience mean? Where do we start building resilience? Which are all the necessary components of resilience that we need to take into account? We have international elements, national elements, local elements; we have governmental and civil society parts, and they are all interlinked. There is no safe space anywhere. We need to create comprehensive solutions that cover all possible vulnerabilities. So I would say that the security culture needs to be changed. In the security culture we tend to think about domestic threats and then international threats; now they are part of the same picture. We tended to think about military and nonmilitary; they too are very much interlinked in this new technological environment. So new types of thinking, new types of culture. I would like to get back to universities and schools and try to engage experts to think about the components of this new culture.
Ali Wyne: Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats. Clint Watts, General Manager of Microsoft Threat Analysis Center. Teija, Clint, thank you both very much for being here.
Teija Tiilikainen: Thank you. It was a pleasure. Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of our new season. I'm Ali Wyne. Thank you very much for listening.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Podcast: Cyber mercenaries and the global surveillance-for-hire market
Listen: The use of mercenaries is nothing new in kinetic warfare, but they are becoming a growing threat in cyberspace as well. The weapon of choice for cyber mercenaries is malicious spyware that undermines otherwise benign technologies and can be sold for profit. Luckily, awareness about this threat is also growing, and increasing global coordination efforts are being put forth to combat this dangerous trend.
In episode 2, season 2 of Patching the System, we're focusing on the international system of bringing peace and security online. In this episode, we look at what governments and private enterprises are doing to combat the growth of the cyber mercenary industry.
Our participants are:
- Eric Wenger, Senior Director for Technology Policy at Cisco
- Stéphane Duguin, CEO of the CyberPeace Institute
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Cyber mercenaries and the global surveillance-for-hire market
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Eric Wenger: There's no phishing or fooling of the user into installing something on their device. This technology is so powerful that it can overcome the defenses on a device. So this is a tool that is on the level of sophistication with a military grade weapon and needs to be treated that way.
Stéphane Duguin: What we're facing is a multifaceted threat, with a loose network of individuals, financiers, and companies which are playing a link between states when it comes to the deployment of these surveillance capabilities. So if you want to curb this kind of threat, you need to act as a network.
Ali Wyne: In the ongoing war in Ukraine, both sides have employed mercenaries to supplement and fortify their own armies. Now, guns for hire are nothing new in kinetic warfare, but in cyberspace, mercenaries exist as well to augment government capabilities and their weapon of choice is malicious spyware that undermines peaceful technology, and which can be sold for profit. Today we'll enter the world of cyber mercenaries and the work that's being done to stop them.
Welcome to Patching The System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. In this episode, we're looking at the latest in cyber mercenaries and what's being done to stop them. Last season we spoke to David Agranovich, director of Global Threat Disruption at Meta, about what exactly it is that cyber mercenaries do.
David Agranovich: These are private companies who are offering surveillance capabilities, which once were essentially the exclusive remit of nation state intelligence services, to any paying client. The global surveillance for hire industry, for example, targets people across the internet to collect intelligence, to try and manipulate them into revealing information about themselves and ultimately to try and compromise their devices, their accounts, steal their data.
Ali Wyne: And since then, awareness has grown and efforts to fight these groups have been fast tracked. In March of this year, the Tech Accord announced a set of principles specifically designed to curb the growth of the cyber mercenary market, which some estimate to be more than $12 billion globally. That same month, the White House issued an executive order to prohibit the U.S. government from using commercial spyware that could put national security at risk, an important piece of this cyber mercenary ecosystem.
On the other side of the Atlantic, a European Parliament committee finalized a report on the use of spyware on the continent and made recommendations for regulating it. And most recently, bipartisan legislation was introduced in the United States to prohibit assistance to foreign governments that use commercial spyware to target American citizens.
Are all of these coordinated efforts enough to stop the growth of this industry? Today I'm joined by Eric Wenger, Senior Director for Technology Policy at Cisco, and Stéphane Duguin, CEO of the CyberPeace Institute. Welcome to you both.
Eric Wenger: Thank you.
Stéphane Duguin: Thank you.
Ali Wyne: Now, I mentioned this point briefly in the introduction, but I'd love to hear more from both of you about specific examples of what it is that cyber mercenaries are doing. What characterizes their work, especially from the latest threats that you've seen?
Stéphane Duguin: It's important maybe to start with a bit of a definition of what we are talking about when we talk about cyber mercenaries. So interestingly, there is the official definition and then there is what we all mean. The official definition - you can find it in the report to the General Assembly of the United Nations - is really linked to private actors that can be engaged by states and non-state actors. It's really about states taking action to engage someone, to contract someone, in order to conduct cyber operations in the context of an armed conflict.
I would argue that for this conversation, we need to look at the concept of cyber mercenaries more widely, and look at this as a network of individuals, of companies, of financial tools, of specific interests that, at the end of the day, ensure global insecurity. Because all of this is about private sector entities providing their expertise, their time, their tools to governments to conduct, clearly at scale, illegal, unethical surveillance. And to do this, investment - money - needs to pour into a market, because it's a market. Which finances what? Global insecurity.
Eric Wenger: I would add that there's another layer to this problem that needs to be put into context, and that is, Stéphane correctly noted, that these are private sector entities and that their customers are governments that are engaged in some sort of activity that is couched in terms of protecting safety or national security. But the companies themselves are selling technology that is regulated and therefore is being licensed from a government as well too. I think that's really the fascinating dynamic here is that you have a private sector intermediary that is essentially involved in a transaction that is from one government to another government with that private sector actor in the middle being the creator of the technology, but it is subject to a license by one government for a sale to another government.
Ali Wyne: This market is obviously growing quickly, and I mentioned in my introductory remarks that $12 billion global figure, so obviously there's a lot of demand. From what you've seen, who are the customers and what's driving the growth of this industry?
Eric Wenger: Well, the concerning part of the story is that there have been a number of high profile incidents that have indicated these technologies are being used not just to protect against attacks on a nation, but in order to benefit the stability of a regime. And in that context, what you see are journalists being the subject of the use of these technologies or dissidents, human rights activists. And that's the part that really strikes me as being quite disturbing. And it is frankly the hardest part of this problem to get at because as I noted before, if you have these private sector actors that are essentially acting as intermediaries between governments, then it's hard to have a lot of visibility from the outside of this market into what are the justifications that are enabling sales. Who is this technology going to? How is it being used and how is it potentially being checked in order to address the human rights concerns that I've flagged here?
Ali Wyne: Stéphane, let me come back to you. So you used to work in law enforcement and given your law enforcement background, one question that one might ask is why shouldn't governments be taking advantage of cyber mercenaries if they are making tools that help to, for example, track down terrorists or otherwise fight crime and improve national defense? Why shouldn't governments be taking advantage of them?
Stéphane Duguin: Something that is quite magical about law enforcement is that it's about enforcing the law. And in this case, there's clear infringement all over the place. Let's look into the use cases that we know about. So when it comes to the law, what kind of judicial activities have been undertaken after the use, sale or export of these kinds of tools? There's this company, Amesys, which is now sued for complicity in acts of torture over sales of surveillance technologies to Libya. You have these cases of dissidents who have been arrested in Egypt in the context of the acquisition of the Predator tool. More recently we've seen what happened in Greece with the investigation around the surveillance of critics and opponents. And you can add example after example. This has nothing to do with law enforcement.
So my experience in law enforcement is that you have a case, and when you have a case, you have oversight, judicial oversight. I was lucky to work in law enforcement in Europe, in a democratic construct that goes under the oversight of parliament. Where is this construct when a private sector entity has free rein to research and develop, increase, and export - exactly as was said before, in between states - a technology which, by the way, is creating expertise within that same company, for people who are going to sell this expertise left and right? Where is the oversight? And where are the rules that would put this into a normal law enforcement system?
And just to finish on this: I worked on investigating terrorist groups and cyber gangs most of my career, and we can make cases, we can make very, very, very good cases. I would not say, admittedly, that the problem is about putting everyone under surveillance. The problem is more about investing resources in law enforcement and in the judicial system to make sure that when there's a case, there's accountability, redress and repair for victims. And these do not need surveillance at scale.
Ali Wyne: Eric, let me come back to you. I want to give folks who are listening a little bit of a sense of the size of the problem, to help put it in perspective. So when we talk about cyber mercenaries, just how big is the threat from them and the organizations for which they work? Is that threat just an annoyance, or is it a real cause for concern? And who's most affected by the actions that they take?
Eric Wenger: We could talk about the size of the market and who is impacted by it. That's certainly part of the equation in trying to size the threat. But we also have to have a baseline understanding of what the technology is that we're talking about in order for people to appreciate why there's so much concern. We're talking about exploits that can be sent from the deployer of the technology to a mobile device that's used by an individual or an organization without any action being taken by the user. There's nothing you have to click, there's nothing you have to accept. There's no phishing or fooling of the user into installing something on their device. This technology is so powerful that it can overcome the defenses on a device. And then that device is completely compromised, so that cameras can be turned on, files stored on the device can be accessed, microphones can be activated.
So this is a tool that is on the level of sophistication with a military grade weapon and needs to be treated that way. So the concern is the cutout of a private sector entity in between the government, and these are typically democratic governments that are licensing these technologies to other governments that wouldn't have the capabilities to develop these technologies on their own. And then once in their hands, it's difficult if not impossible, to make sure that they are used only within the bounds of whatever the original justification for it was.
So in theory you would say, let's say there was some concern about a terrorist operation that justified the access to this technology, which in that government's hands can be repurposed for other things that might be a temptation, which would include protecting of the stability of the regime by going after those who are critics or dissidents or journalists that are writing things that they view as being unhelpful to their ability to govern. And so those lines are very difficult to maintain with a technology that is so powerful that is in the hands of a government without the type of oversight that Stéphane was referencing before.
Ali Wyne: So Stéphane, let me come back to you. Building off the answer Eric just gave, what groups and individuals are most at risk from this growing cyber mercenary market?
Stéphane Duguin: History has shown that those targeted by the deployment of these tools and the activities of the cyber mercenaries are political opponents and journalists, human rights defenders, lawyers, government officials, pro-democracy activists, opposition members, and so on. So we are quite far from terrorists, organized criminals and the like.
And interestingly, it's not only that this profile of who is targeted says a lot about the ethics and values underlying this market ecosystem. What is also concerning is that we know about this not from law enforcement or from public sector entities that would investigate the misuse of these technologies and blow the whistle. We know about it thanks to the amazing work of a few organizations over the past decade, like the Citizen Lab and Amnesty Tech, who could track and demonstrate the usage, for example, of FinFisher against pro-democracy activists in 2012 and opposition members in 2013, FinSpy afterwards, and then it moved to Pegasus from the firm NSO.
Now we have the whole explanation of what happened with Predator. It's quite concerning that these activities, which are at the core of abuses of human rights and of the most essential privacy, are happening in the shadows, as Eric was mentioning before, with a total asymmetry between the almost military-grade tools that are put in place and the little capacity of the targets to defend themselves. And this is uncovered not by the people we entrust with our public services and the enforcement of our rights, but by investigative groups and civil society, which are now, almost as a living, doing global investigations against the misuse of offensive cyber capabilities.
Ali Wyne: Your organization, the CyberPeace Institute, what is the CyberPeace Institute doing to combat these actors? And more broadly, what is the role of civil society in working to address this growing challenge of cyber mercenary actors?
Stéphane Duguin: What we're facing is a multifaceted threat, with a loose network of individuals, financiers, and companies which are playing a link between states when it comes to the deployment of these surveillance capabilities. So if you want to curb this kind of threat, you need to act as a network. So the role of the CyberPeace Institute, among other civil society organizations, is to bring together the capable and the willing so that we can look at the whole range of issues we're facing.
One part of it is the research and development and deployment of these tools. The second part is the detection of their usage. Another part is looking into the policy landscape, informing policymaking, and demonstrating when policies have been violated - export controls, for example, when it comes to the management of these tools. Another part of the work is about measuring the human harm that these tools lead to.
So we, for example, at the CyberPeace Institute cooperated with the development of the Digital Violence Platform, which shows the human impact of, for example, the usage of Pegasus on individuals. We are also in the lead in one of the working groups of the Paris Peace Forum. We need to bring a multi-stakeholder community to a maturity level where we understand exactly what this threat is costing society and what kind of action we could take all together.
And notably, last year at the World Economic Forum, we joined forces with Access Now, the Office of the High Commissioner for Human Rights, Human Rights Watch, Amnesty International, the International Trade Union Confederation and Consumers International to call for a moratorium on the usage of these tools until we have the certainty that they are researched, deployed, exported and used with the proper oversight, because otherwise the checks and balances cannot work.
Ali Wyne: And you just mentioned Pegasus spyware and that kind of software has been getting more and more attention, including from policymakers. So Eric, let me come back to you now. What kinds of actions are governments taking to curb this market?
Eric Wenger: So as I noted before that this is an interesting combination of technology, of private sector entities that are creating the technology, the regulators who are in the governments where those companies are located who control the sale of the technology, and then the technology consumers who are, again, as Stéphane noted, other governments. And so it's this interesting blend of private and public sector actors that's going to require some sort of coordinated approach that runs across both. And I think you're seeing action in both of those spheres. In terms of private sector companies, Cisco, my employer, joined together with a number of other companies filing a friend of the court or amicus brief in litigation that had been brought by what was then Facebook, now Meta, against a company that was deploying technology that had hacked into their WhatsApp software. And in that case we joined together with a number of other companies, I believe it was Microsoft and Dell and Apple and others who joined together in filing a brief in that case.
We of course come together under the umbrella of the Tech Accord and we can talk about the principles that we developed among the companies. I think there's 150 companies that joined ultimately in signing that document in agreement that we have concerns that there are things we want to do in a concerted way to try to get at this market so that it doesn't cause the kinds of impacts that Stéphane talked about before.
Again, there's clearly a strong government to government piece of this that needs to be taken on. And then Stéphane also noted the Paris Peace Forum, and that this topic of how to deal with spyware and cyber mercenaries is going to be on the agenda there, which again is important because this is a government led forum, but it's one where you also see private sector and civil society entities actively engaged. Stéphane also mentioned the important work that's being done by Citizen Lab. And then we have threat intelligence researchers at Cisco that operate under the brand of Talos.
These are some of the most effective threat intelligence researchers in the world, and they're really interested in this problem as well too, and starting to work with people who suspect that their devices may have been compromised in this way to take a look at them and to help them.
And then the companies that make the cell phones and operating systems, Google and Apple for instance, have been doing important work about detecting these kinds of changes to the devices and then providing notice to those whose devices may have been impacted in these ways so that they are aware and are able to try to take further defensive measures. It's really quite an active space and as we've discussed here several times, it's one that will only be really effectively taken on through a concerted effort that runs across the government and private sector space. And again, also with civil society as well too.
Ali Wyne: Talk to us a little bit about what technology companies can do to shut down this market?
Eric Wenger: Yeah, it was natural that this would grow out of the Tech Accord, which itself was a commitment by companies to protect their customers against attacks that misuse technology that are coming from the government space. There was a recognition among our companies that yes, some of this is clearly most effectively addressed at that government to government level with awareness that's being created by civil society. But this is also a problem that relates to the creation of technology and the companies that are engaged in these business models are procuring and using technology that could be coming from companies that find this business model to be highly problematic.
And so that's essentially what we did: we sat down as a group and started to talk about what part of the problem technology, and access to technology, potentially contributes to, and where we have some ability to make a difference. We then agreed amongst ourselves on the steps we might be able to take to limit the proliferation of this technology, the market, and the companies that are engaging in this type of business. And that, coming together with the work that's being done at the government-to-government level, hopefully will make a significant dent in the size of this market.
Ali Wyne: Stéphane, let me come back to you as promised. Whether it's governments, whether it's technology companies, what kinds of actions can these actors take to shut down this cyber mercenary market?
Stéphane Duguin: Eric listed a lot of what is happening in this space, and it's very exhaustive, and it tells you how complex the answer is. We try to put this into a framework: what is expected from states is regulation first. Regulation means having the regulation but also implementing the regulation. And under the word regulation, I would even put the norms discussion, where there are non-binding norms that have been agreed between states, and some of them could be leveraged and operationalized in order to prevent such proliferation, because that's what we're talking about.
Another type of regulation that could be far better implemented is export control. For example, in the European Union, we at the CyberPeace Institute were discussing this in the context of the PEGA Committee - the work of the EU Parliament looking into the lawfulness and ethical use of these kinds of tools.
We also added this multi-stakeholder approach through the EU Cyber Agora to discuss the problem, and clearly export control needs to be put at another level of operationalization. So, regulation. Then regulation needs to mean capacity to litigate - giving the space and the means to the apparatus that is in the business of litigation.
So today, what do we have? For example, executives from Amesys and Nexa Technologies were indicted for complicity in torture; NSO Group is facing multiple lawsuits, mostly from civil society and corporate plaintiffs in various countries. But that's clearly not enough.
So this should be not only coming from civil society, journalists, plaintiff, but we should see some investigative capacity from states, meaning law enforcement, looking into this kind of misuse. The other part is attribution, like public attribution on what is happening. So who are the actors, what are these companies, how this network are working?
So we can see over time how the regulation, the litigation is having an impact on the ecosystem. Otherwise, it's like emptying the ocean with a spoon. So I guess you know the great work done by the community, so we mentioning it before the Citizen Lab, the Amnesty Tech, Access Now, the work of tons of other organizations, I don't want to forget anyone, is not going to scale to a level if policy makers do not do their job, which is what is policymaking in the criminal context? It is reducing the space that you give to criminals. And today in this context for cyber mercenaries, the space is way too big. So I would say around this regulation, litigation and public attribution, it's kind of a roadmap for government.
Ali Wyne: Eric, let me come back to you. In one of your earlier answers you mentioned the principles that the Tech Accord came out with recently, just a few months ago in fact, to oppose the cyber mercenary industry. Talk to us a little bit more about what exactly those principles entail and what their intended impact is.
Eric Wenger: Sure. Stéphane makes an important point about the context of what governments can do, things like putting companies of concern on the entity list to restrict their ability to license technology they might need to build the tools they are selling. But coming back to where companies like those who joined the Tech Accord can make a difference. I noted that these principles build on the Cybersecurity Tech Accord's founding commitments, which are about building strong defense into our products, not enabling offensive use of our technologies, capacity building, in other words helping governments do the work they need to protect their citizens, and working together across the private sector, civil society, and governments. These particular principles are aimed at this specific problem. The idea is that we will collectively work together to counter the use of products that harm people, and identify ways we can actively counter the market.
One of the ways we mentioned before is participation in litigation, where that's the appropriate step. We're also investing in cybersecurity awareness for customers so that they have more understanding of this problem. And there are tools being built by the companies that develop the operating systems on mobile devices: if you're in a highly vulnerable group, a journalist, a human rights dissident, or a lawyer working in an oppressive legal environment, some of these phones now enable more defensive modes. So our companies are working together, and on our own, to protect customers and users by building up the security capabilities of our devices and products.
And then finally, Stéphane mentioned his role in law enforcement before; I also was a computer crime prosecutor at the Department of Justice. It's really important for those who are conducting legitimate, lawful investigations to have a clear understanding of the processes companies use to handle valid legal requests for information. So we built that into this set of principles as well: where there are legal and lawful pathways to get information from a company's lawful intercept or compulsory access tools, we are transparent about how we operate in those spaces and we clearly communicate what our processes are for handling those kinds of demands from governments.
Ali Wyne: Final question for both of you. What is the single most important step that societies can take to stop the work of cyber mercenaries?
Stéphane Duguin: Eric opened it very, very well in terms of the ambition and the partnership, the activities deployed both by civil society and by corporations, the Tech Accord is an excellent example, in order to curb these threats. And interestingly, maybe it also came from the fact that there was not so much push on the government side to do something at scale against this threat. So clearly today, those representing society and its needs in this context, pushing the ball forward, are civil society, corporations, and academia. I would say governments are now starting to grasp the size of the problem. Something Eric mentioned, I would like to build on it because it's about society and the values we believe in: there is a need for law enforcement, and most of law enforcement and the judiciary want to work in a lawful way. That's the vast majority, at least among the law enforcement I can relate to in Europe, where I worked.
In this context, it's quite important that the framework is clear and the capacity and resources are there, so that it doesn't leave so much space for these cyber mercenaries to impose themselves as the go-to platform, the place where solutions can be engineered because there's nothing else out there. Beyond that, society has to make a choice. Do we want such a market to proliferate with no checks and balances, no oversight, just the wild west of surveillance? Or do we at minimum say stop, put a moratorium in place, set up clear oversight processes, and look into what makes sense and what we can accept as a society before letting this go? And the last thing is to make the best use of the regulation we have and the regulation we're going to have. The regulation now under negotiation in the EU, like the AI Act, the Cyber Resilience Act, or the Cyber Solidarity Act: it would not take much for it to look not only at what makes systems insecure, but also at who is trying to make systems insecure.
Ali Wyne: Eric, let me come to you to close us out and put the same question to you. What is the single most important step that societies can take to stop the work of cyber mercenaries?
Eric Wenger: Well, I'd love to say it was one thing, but it's really going to be a combination of things that come together. And that's going to involve the governments that regulate access to the market for this technology. It may not be reasonable to expect that the governments that want to consume this technology will come to the table, but certainly the governments that have control over the markets where the technology is being developed can work together. As Stéphane mentioned, the United States government, the French government, and the UK government have all been out in front on this.
Those governments, and others that share their concerns, coming together with the experts in the threat intelligence space, in academia, in civil society, and in companies. And then companies that supply technologies that are critical, foundational elements of these firms' ability to engage in the market also have an important role to play. I think that's what we're bringing to the equation for the first time.
So it's this combination of actors that are coming together, recognizing that it's a problem and agreeing that there's something that we all need to do together in order to take this on. It's really the only way that we can be effective at addressing the concerns that we've been discussing here today.
Ali Wyne: Eric Wenger, Senior Director for Technology Policy at Cisco. Stéphane Duguin, CEO of the CyberPeace Institute. Thank you both so much for speaking with me today.
Eric Wenger: Thank you for having us.
Ali Wyne: And that's it for this episode of Patching the System. There are more to come, so follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of this new season. I'm Ali Wyne. Thank you very much for listening.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
- Podcast: How cyber diplomacy is protecting the world from online threats ›
- Podcast: Cyber Mercenaries and the digital “wild west" ›
- Attacked by ransomware: The hospital network brought to a standstill by cybercriminals ›
- Hacked by Pegasus spyware: The human rights lawyer trying to free a princess ›
- The threat of CEO fraud and one NGO's resilient response ›
- Podcast: Foreign influence, cyberspace, and geopolitics - GZERO Media ›
- Why privacy is priceless - GZERO Media ›
- Would the proposed UN Cybercrime Treaty hurt more than it helps? - GZERO Media ›
- Podcast: Can governments protect us from dangerous software bugs? - GZERO Media ›
Brad Smith: Russia's war in Ukraine started on Feb 23 in cyberspace
Weeks before Russia invaded Ukraine, Microsoft was already helping the Ukrainians defend their cyberspace against Russian hackers, for instance by moving the government's physical servers into the cloud to avoid destruction by Russian missiles.
In the virtual world, like on the battlefield, "you've gotta disperse your defensive assets so they're not vulnerable to a single attack," Microsoft President Brad Smith says in a Global Stage livestream discussion at the World Economic Forum in Davos, "Crisis in a digital world," hosted by GZERO in partnership with Microsoft.
Then came defending Ukraine against Russian cyberattacks.
In cyberspace, Smith says, the war really started on February 23, a day before Russia's land invasion, when Microsoft noticed, via its own data centers in Seattle, some 300 coordinated attacks trying to take down Ukrainian government websites and banks.
Still, the defense held. Why? Because "so far in this war, defense has proven to be stronger [than] offense, frankly, in almost every category, but especially when it comes to cyberspace."
Watch more of this Global Stage discussion: "Crisis in a digital world"
- What We're Watching: Cyberwarfare in Ukraine, Imran Khan in ... ›
- Why hasn't Ukraine suffered a debilitating Russian cyberattack ... ›
- Podcast: Cyber threats in Ukraine and beyond - GZERO Media ›
- How Russian cyberwarfare could impact Ukraine & NATO response ... ›
- A different Davos amid geopolitical conflicts and security issues - GZERO Media ›
- Microsoft president Brad Smith has a plan to meet the UN's goals - GZERO Media ›
- Russia freezing out Ukrainian civilians because it can't beat military, says Microsoft's Brad Smith - GZERO Media ›
- Tech innovation can outpace cyber threats, says Microsoft's Brad Smith - GZERO Media ›
Podcast: Cyber Mercenaries and the digital “wild west"
Listen: The concept of mercenaries, hired soldiers and specialists working privately to fight a nation’s battles, is nearly as old as war itself.
In our fourth episode of “Patching the System,” we’re discussing the threat cyber mercenaries pose to individuals, governments, and the private sector. We’ll examine how spyware used to track criminal and terrorist activity around the world has been abused by bad actors in cyberspace who are hacking and spying on activists, journalists, and even government officials. And we’ll talk about what’s being done to stop it.
Our participants are:
- John Scott-Railton, Senior Researcher at the Citizen Lab at the University of Toronto's Munk School
- David Agranovich, Director of Global Threat Disruption at Meta.
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Podcast: Cyber Mercenaries and the digital “wild west"
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
John Scott-Railton: You go to a growing number of mercenary spyware companies and surveillance companies that basically offer you NSA-style capabilities in a box and say, "Look, you can pay us a certain amount of money and we're going to send you this stuff." You're seeing basically the direct proliferation, not only of those capabilities, but actually national security information about how to do this kind of hacking moving its way right into the private sector.
David Agranovich: They fill a niche in the market, nation states that lack surveillance capabilities themselves, threat actors who want deniability in their surveillance activities and clients like law firms or litigants who want an edge on their competition. In reality, the industry is putting a thin veneer of professionalism over the same type of abusive activity that we would see from other malicious hacking groups.
INTERVIEW
Ali Wyne: Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a Senior Analyst at Eurasia Group.
Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. And today we're talking about mercenaries, a concept almost as old as warfare itself: hired guns, professional soldiers used in armed conflict, from Germans employed by the Romans in the fourth century to the Routiers of the Middle Ages, to modern-day security firms whose fighters have been used in the Iraq and Afghanistan wars, as well as the current war in Ukraine.
But our conversation today is about cyber mercenaries. These are financially motivated private actors working in the online world to hack, to attack, and to spy on behalf of governments. And in today's world, where warfare is increasingly waged in the digital realm, nations use all the tools at their disposal to monitor criminal and terrorist activity online.
That includes spyware tools such as Pegasus, a software made by the Israel-based cyber security firm NSO Group that is designed to gain access to smartphones surreptitiously in order to spy on targets. But that same software, which government organizations around the world have used to track terrorists and criminals, has also been used to spy on activists, journalists, and even officials with the U.S. State Department.
Here to talk more about the growing world of cyber mercenaries and the tech tools they use and abuse are two top experts in the field, John Scott-Railton or JSR, he's a Senior Researcher at the Citizen Lab at the University of Toronto's Munk School and David Agranovich, who now brings his years of experience in the policy space to his role as Director of Global Threat Disruption at Meta. Welcome to both of you.
JSR: Good to be here.
David Agranovich: Thanks for having us.
Ali Wyne: JSR, I'm going to start with you. So I mentioned in my introductory remarks, this Pegasus software. So tell us a little bit more about that software produced by the NSO Group and how it illustrates the challenges that we're here to talk about today?
JSR: So you can think of Pegasus as something like a service. Governments around the world have a strong appetite to gain access to people's devices and to know what they're typing and chatting about in encrypted ways, and Pegasus is a service to do it. It's a technology for infecting phones remotely, increasingly with zero-click vulnerabilities. That means accessing the phones without any deception required; nobody needs to be tricked into clicking a link or opening an attachment. The phone is then turned into a virtual spy in the person's pocket. Once a device is infected with Pegasus, it can do everything the user can do and some things the user can't. It can siphon off chats, pictures, and contact lists, but also remotely enable the microphone and the video camera to turn the phone into a bug in a room, for example. And it can do something else: it can take the credentials the victim uses to access their cloud accounts, siphon those away too, and use them even after the infection is long gone to maintain access to the person's cloud.
So you can think of it as devastating, total access to a person's digital world. NSO, of course, is just one of the many companies that makes this kind of spyware. We've heard a lot about them in part because there's just an absolute mountain of abuse cases, some of them discovered by myself and my colleagues around the world, with governments acquiring this technology, perhaps under the rubric of anti-terror or criminal investigations, but of course winding up conducting political espionage, monitoring journalists and others.
Ali Wyne: David, let me come to you. Before we dive into the deeper conversation, I think we should get a little bit into semantics, a little bit into nomenclature, and start with some basic definitions. When most folks hear the phrase cyber mercenary, some might think it's any kind of bad actor or hacker; others might draw parallels to real-life, analog mercenaries, hired soldiers in war. So how do you define the phrase cyber mercenary? How does Meta define the term, and why?
David Agranovich: So maybe just to ground ourselves in definitions a bit. My team at Meta works to coordinate disruption and deterrence of a whole ecosystem of adversarial threat actors online. That can include things like info ops, efforts to manipulate and corrupt public debate through fake personas. It can include cyber espionage activity, which is similar to what we're talking about today: efforts to hack people's phones, email addresses, and devices, and scaled spamming abuse. When we're talking about cyber mercenary groups, I think of that within the broader cyber espionage space. These are people engaged, as JSR talked about, in surveillance, in efforts to collect information on people, to hack their devices, and to gain access to private information across the broader internet. These are private companies offering surveillance capabilities, which were once essentially the exclusive remit of nation-state intelligence services, to any paying client.
The global surveillance-for-hire industry, for example, targets people across the internet to collect intelligence, to try and manipulate them into revealing information about themselves, and ultimately to try and compromise their devices and accounts and steal their data. They'll often claim that their services and the surveillance ware they build are intended to focus on criminals and terrorists. But what our teams have found, and what groups doing incredible work like Citizen Lab have found, is that they're regularly targeting journalists, dissidents, critics of authoritarian regimes, the families of opposition figures, and human rights activists around the world.
These companies are part of a sprawling industry that provides these intrusive tools and surveillance services, indiscriminately to any customer, regardless of who they're targeting or the human rights abuses that they might enable.
Ali Wyne: What strikes me, just listening to your response, is not only how vast and sprawling this industry is, but also how quickly it seems to have risen up, comparing the state of this industry today with even five or ten years ago. How did it rise up? What are some of the forces propelling its growth? Give us a sense of the origin story and of the current state of play of this industry today.
David Agranovich: As we see it, these firms grew out of essentially two principal factors: the first is impunity, and the second is demand for sophisticated surveillance capabilities from less sophisticated actors.
On the first point, companies like NSO or Black Cube or those that we cited in our investigative report from December last year, they wouldn't be able to flagrantly violate the privacy of innocent people if they faced real scrutiny and costs for their actions. But also, to that second point, they fill a niche in the market, nation states that lack surveillance capabilities themselves, threat actors who want deniability in their surveillance activities and clients like law firms or litigants who want an edge on their competition. In reality, the industry is putting a thin veneer of professionalism over the same type of abusive activity that we would see from other malicious hacking groups.
Ali Wyne: So JSR, so I want to come to you now. So David has kind of given us this origin story and he has given us a state of play and has really given us a sense of how sprawling this industry is. So, I guess, for lack of a better phrase, there are jobs here, there are jobs in this space. Who's hiring these cyber mercenaries and for what purposes? Who are they targeting?
JSR: There are a lot of jobs. And I think what's interesting, David pointed out the problem about accountability. And I think that's exactly right. Right now, you have an ecosystem that is largely defined only by what people will pay for, which is a seemingly endless problem set. So who's paying? Well, you have a lot of governments that are looking for this kind of capability that can't develop it endogenously and so go onto the market and look for it. I think even after the Snowden revelations, a lot of governments were like, "Man, I wish I had that stuff. How do we get that?" And the answer is increasingly simple. You go to a growing number of mercenary spyware companies and surveillance companies that basically offer you NSA-style capabilities in a box and say, "Look, you can pay us a certain amount of money and we're going to send you this stuff."
And as David points out, a lot of it is done under the sort of rhetorical flag of convenience of saying, "Well, this is stuff for tracking terrorists and criminals." But actually at this point, we probably have more evidence of abuses than we do confirmed cases where this stuff has been used against criminals. Who's doing the work? A lot of the people who go into this industry are hired by companies with names like NSO, Candiru and others. Many of them come out of government, they come out of either doing their military service in a place like Israel in a unit that focuses on cyber warfare or they come out of places like the CIA, the NSA, Five Eyes and other countries' intelligence services.
Which in itself is really concerning, because you're seeing the direct proliferation not only of those capabilities but also of national security knowledge about how to do this kind of hacking, moving its way right into the private sector. And we've seen some really interesting cases in the last year of people who came out of the US intelligence community, for example, doing exactly this kind of thing and then pretty recently getting indicted for it. So my hope is that we're beginning to see a bit of accountability around this, but it's a really concerning problem set, in part because the knowledge is specialized, a lot of it relates to countries' national security, and it's now flowing into a big, sprawling, unregulated marketplace.
Ali Wyne: So David, let's build on what JSR just said. We have this big, sprawling, and seemingly increasingly unregulated surveillance ecosystem. It's more democratized, there are more individuals who can participate, and the surveillance is getting more sophisticated. I want to go back to your day job. You have a big purview; you head up Global Threat Disruption at Meta, which is responsible for a very wide range of platforms. Which groups do you see, in your professional capacity at Meta, as being most vulnerable to the actions of cyber mercenaries?
David Agranovich: I think what's remarkable about these types of cyber mercenary groups, as JSR has noted, is just how indiscriminate their targeting is across the internet and how diverse that targeting is across multiple different internet platforms. When we released our research report into seven cyber mercenary entities last year, we found that the targets of those networks ranged from journalists and opposition politicians to litigants in lawsuits to democracy activists. That targeting wasn't confined to our platforms either. One of the most concerning trends we saw across these networks, and which Citizen Lab has done a significant amount of investigative reporting into, is the use of these types of technologies to target journalists, often in countries where press freedoms are at risk, and not just to collect open-source information about someone, but really to break into their private information and hack their devices.
Some of the capabilities that JSR mentioned about the Pegasus malware for example, are incredibly privacy intrusive. Ultimately the problem that I see here is these firms effectively obscure the identity of their clients. Which means anybody, authoritarian regimes, corrupt officials, any client willing to pay the money, can ostensibly turn these types of powerful surveillance tools on anyone that they dislike. And so to answer your question, who's most vulnerable? The reality is that anyone can be, it's why we have to take the activities of these types of firms so seriously.
Ali Wyne: You both have given us a sense of this really sprawling surveillance ecosystem, the growing range of targets, and the growing democratization of this kind of nefarious activity. Can you give us a sense of what tactics you've seen lately that are new? When I think back to some of the earlier conversations we've had in this podcast series, some of our guests have said, look, there are basic precautionary measures that all of us can take, whether we are technology experts such as yourselves or just lay consumers.
Use different passwords for different platforms, take basic steps to safeguard our information. But obviously, the pace at which individuals can adapt and take preventative measures is invariably going to be outstripped by the speed with which actors can find new ways of engaging in cyber mercenary activities. So in your time at Meta, have you seen new tactics being used by these groups in recent years, and how are you tracking and identifying them?
David Agranovich: So maybe just to ground our understanding of how these operations work.
Ali Wyne: Sure.
David Agranovich: How do these tactics fit across the taxonomy? We break these operations down into three phases, what we call the Surveillance Chain. The first phase, called reconnaissance, is essentially an effort by a threat actor to build a profile on their target through open-source information. The second phase, which we call engagement, is where the threat actor starts to build a rapport with the target, with the goal of social-engineering them into the final phase, which is exploitation. That final step, which most often happens off of our platform, is where the target receives malware or a spearphishing link in an attempt to steal their account data.
Generally, the way we see the tactics play out across these three phases is that these operations use social media early in their targeting: to collect information and build a profile in the reconnaissance phase, or to engage with a target and build a rapport in the engagement phase. Then they'll attempt to divert their target to other platforms, like malware-riddled websites where they might try to get the target to download a Trojanized chat application that delivers malware onto their device, or other social media platforms where they'll try and exploit them directly.
David Agranovich: I think the most consistent trend we see with these types of operations is adversarial adaptation. What that means is that when we do these takedowns and our teams publish reports on the tactics we're seeing, or when open-source investigative organizations or civil society groups find these types of networks themselves and disclose what they're doing, these firms adapt quickly to try and get around our detection. That ultimately makes it really important, one, to keep investigating and holding these firms accountable, and two, to follow these threats wherever they may go and tackle this as a whole-of-society problem. That's going to require a more comprehensive response if we want to see these types of tools used in a responsible way. But those are, I think, some of the trends we've seen more broadly.
JSR: Mm-hmm (affirmative).
Ali Wyne: And JSR, let me come to you, just in responding to David. So in your own work at Citizen Lab, what kinds of trends are you observing in terms of either targets and/or tactics?
JSR: Well, the scariest trend, and I think we're seeing it more or less wherever we scratch, is zero-click attacks. It used to be that you could tell people, and be Buddhist about it, "Look, detach from attachments. Be mindful of links that can bite." There's a way to do that, and in fact I'm not just pulling that out of nowhere. We worked many years ago with a group of Tibetans who were looking for an awareness-raising campaign to reduce the threat from Chinese threat actors. And so we used this very Buddhist concept of detaching from attachments: stop sending email attachments to each other. That resulted in a real drop in the efficacy of these Chinese hacking groups as they tried to find new ways to get people to click on malware. It took a while.
Ali Wyne: Got it.
JSR: But ultimately, per David, we saw adaptation. In general, I think the problem is twofold. One, human behavior is fraught with what we call forever-day vulnerabilities that you can't patch. People are vulnerable to certain kinds of things, certain kinds of deception, and so we need platforms and technologies to do part of the work of protecting people and to try to prevent attacks before they reach the level of a victim having a long, drawn-out conversation with somebody. The other thing that's really concerning, of course, is that NSO and many others at this point are selling their customers ways to infect devices, whether laptops or phones, that don't require any user interaction. And obviously this is pretty bad, because there's nothing you can do about it as a user; you can keep your device updated but you'll still potentially be susceptible to infection. So you can't really tell people, "Look, here are the three things, and if you just do them right, you'll be fine."
The second problem set that it creates is that it makes it a lot harder for investigators like us to find traces of infection quickly. It used to be the case a couple years ago even, that when I would run a big investigation to find cases of, say, NSO targeting, the primary process of investigation would involve finding text messages, finding those infection messages. Even if the forensic traces of the infection were long gone, we could find those. But now we have to do forensics, which means that for defenders and researchers and investigators like us, it creates a much bigger lift in order to get to a place where we understand what's going on with an attack. And that to me is really concerning. People in the government side talk about concerns around encryption causing criminals to go dark. My biggest concern is hacking groups going dark because it's a lot harder to spot when the infections happen. Of course, the harm remains and that's really what we're talking about.
Ali Wyne: I suspect that phrase will be new to a lot of listeners, fellow listeners such as myself. When you said "detachment from attachments," I thought it was such a nice turn of phrase, and I didn't realize until you related this anecdote that it was actually grounded in a professional experience you had.
JSR: Yeah.
Ali Wyne: But I think it's a compelling mantra for all of us, "Detachment from attachment." I do want to be fair and make sure we're giving listeners a full picture. So David, let me come back to you. One question I imagine some listeners will have is whether, in theory, cyber mercenaries could be used for good. Are there some favorable, or at a minimum legitimate, ways that cyber mercenaries can and/or should be employed? Are there places where they're operating legally? Are there places where they're doing good work? Maybe give us a little bit of perspective on the other side of the ledger?
David Agranovich: So I'll certainly try but I should preface this by saying, most of my career before I joined Meta was in the National Security space.
Ali Wyne: Right.
David Agranovich: And so I take the security threats that I think some of these firms talk about very seriously. The reality is that law enforcement organizations and governments around the world engage in some of this type of surveillance activity. But what's important is that they do so subject to lawful oversight, and with limitations on their legal authorities, at least in democratic systems. What makes this industry so pernicious and so complicated is that, at least as far as we can tell, there's no scalable way to discern the purpose or the legitimacy of their targeting. What's more, the use of these third-party services obfuscates who each end customer might be, what they are collecting, and how the information is being used against potentially vulnerable groups.
There's essentially a fundamental lack of accountability or oversight in the surveillance-for-hire industry that makes it hard to determine whether any of this targeting could be considered legitimate. If we wanted to develop a whole-of-society approach to the surveillance-for-hire space and answer your question, we would need to, one, create the oversight and accountability that surveillance tools should receive; two, hold these companies accountable for how their tools are used or misused; and three, align through the democratic process on how much these firms should be allowed to do. Until we answer those questions, the surveillance industry will be ripe for abuse.
JSR: So one of the interesting things I like to think about is that people think the problem with the mercenary spyware industry is that it sells to autocrats and authoritarians. And of course, that's true; it is part of the problem with the industry, because you can guarantee that autocrats and authoritarians are probably going to use this technology in bad ways, in ways that are anti-democratic and problematic. But we now have a couple of years' experience looking at what happens when big, sprawling democracies from Mexico to India to Poland get their hands on Pegasus. And what we see is abuses there too.
And so I like to think of the problem set as actually being one where there are very few customers you could sell this kind of technology to, this really sophisticated surveillance capability, that wouldn't be likely to abuse it. And to me, you have to situate this within the broader problem set, which is that authoritarianism is resurgent around the world. Unfortunately, this technology has come at a time when lots of authoritarians and would-be authoritarians are looking for technological ways to get into the heads and phones of their subjects and of people around the world. And it's just a very unfortunate thing that these two things are happening at the same time. But I think we can look around the world and say the mercenary industry is absolutely increasing the speed of authoritarianism in certain country contexts, including in certain democracies that are sliding toward authoritarianism. Hungary would be an example, El Salvador is another; both have had big Pegasus scandals, both on paper are democratic, but both are really moving in a concerning direction.
Ali Wyne: I think that geopolitical context is a really helpful backdrop for, or overlay on, our broader conversation. Up until now, we've been talking about trends in the digital space, and you're bringing in this geopolitical element; put the two together and there's a real prospect not only of resurgent authoritarianism, but of resurgent authoritarianism imbued with ever more sophisticated technology. So you've given us a sense of that digital-geopolitical nexus and of the scale of the problem. Given that scale, JSR, as you've outlined it, I want to get you both to react to a snippet of a conversation I recently had with Annalaura Gallo. She's the Head of the Secretariat of the Cybersecurity Tech Accord, and here's what she had to say about cyber mercenaries.
Annalaura Gallo: So the issue here is that we have a private industry that is often legal, that is focused on building very sophisticated offensive cyber capabilities, because these are sometimes even more sophisticated than what states can develop. And then they're sold to governments, but also to other customers. Essentially, they're made to exploit peaceful technology products. We know they've also been used by authoritarian governments for surveillance and to crack down on political opposition in particular. And we think all this is extremely concerning, because first of all we are witnessing a growing market. There is a proliferation of these cyber capabilities that could end up in the wrong hands: not only governments, but also malicious actors that use these tools to conduct larger-scale cyberattacks. So we don't see how we can just continue in a framework where there is no regulation of these actors, because this would put not only human lives at risk, but also the entire internet ecosystem.
Ali Wyne: So David, let me come to you. If this nexus of issues is so large, who needs to begin to take responsibility, and how? You speak as a representative of a major industry player, Meta. What can the private sector in particular do to mitigate the impact of cyber mercenaries? And maybe give us a sense of some general industry principles that you'd recommend.
David Agranovich: There's a responsibility, I think, spread across governments, tech companies, civil society and the surveillance industry itself. Governments have the most power to meaningfully constrain the use of these tools. They can hold abusive firms accountable and they can protect the rights of the victims that these firms target. This industry has thrived in a legal gray zone. So the lack of oversight, the lack of regulation has enabled them to grow and appropriate oversight and regulation would go pretty far in curbing some of the worst abuses. Tech companies like ours also need to continue doing what we can to help protect our users from being targeted and to provide people with the tools to strengthen their account security. We need to make it harder for surveillance companies that are part of this industry to find people on our platform and to try and compromise their devices or their accounts.
We routinely investigate these firms. When we do, we take steps to curb their use of fake accounts, and we work to reverse engineer their malware; we then share threat indicators, or indicators of compromise, with other industry players and with the public. We're also working to notify victims when we see them being targeted, so that they can take steps to mitigate the risk. Because these operations are so often cross-platform, they might leverage applications, social media websites, or websites controlled by the attacker. If we see someone being targeted on one of our platforms, we send them a notification that we think they're being targeted, and in that notification we give them specific steps to follow to lock down their cybersecurity presence. Hopefully that doesn't just protect them from being targeted on our platform; it also might cut off avenues of attack if a surveillance company is trying to get at them some other way.
Third, civil society also has an important role to play, in particular, in determining what the norms in this space should be. What's acceptable? What's going too far? And how to start creating those expectations more broadly. And then finally, I mentioned, the surveillance industry has responsibilities here. You can see these firms claim, as JSR has noted, that they're just in the business of targeting terrorists and criminals. That's just not what our investigations find.
JSR: I agree with David. I think you have to have consequences and accountability, and we are getting there. One of the most interesting things that happened in this space in the last couple of years was the Commerce Department choosing to list NSO. Now this, of course, limits the ability of American companies to do business with NSO Group. But it had an immediate and radical signaling effect on investors in NSO, and the value of NSO's debt plummeted. I think what's interesting about that is it shows that the industry, and the people interested in investing in it, kind of know how far offside they are from basic norms and ethics and risks. And the issue is just that for too long there haven't been consequences.
To put this into a bit of a historical perspective. We've been reporting on the mercenary spyware industry for a decade. Things really started changing only in 2019 when WhatsApp and Meta chose to sue NSO Group. That was the beginning of a different phase. Up until that point, NSO had been like the bully on the playground and civil society groups and people working with victims were like the bullied kids. NSO was just a bigger company, more powerful, pouring millions into PR and lobbying.
Suddenly things got a little more complicated for NSO. And then in the last two years, we've seen not only a string of scandals around NSO coming from a place of investigations and research, but also Apple and others joining legal actions against NSO. And then signals from the US Government, both around NSO specifically and more generally towards the mercenary spyware industry. So I think we have a model for what's needed. It looks like legal consequences and accountability for abuses. It looks like serious leaning in by players like Meta, Apple and others using all the tools available, not just technical control measures. It also looks like making sure that governments do their bit and they protect their own citizens and they also make sure that companies that are really the worst offenders, fueling proliferation, are not able to make a big success at it.
And I think we're still learning how some of these things play out, but it's been essential to have big platforms leaning in. I see it a little bit like a stool: you have civil society, you have government, and you have the private sector. We have two legs now, the private sector and civil society, and that third leg, I think, is coming. I'm very excited, for example, that the European Union is on the cusp of opening a Committee of Inquiry into Pegasus and the mercenary spyware industry more generally; they have a pretty broad mandate. And I just hope to continue to see more governments taking action.
I think when we see that happen, we're also going to see a real shift in the norms of the debate. Because the problem here is not just the tech, it's really the proliferation of that tech. And you solve that problem in the same way you would solve the proliferation of other kinds of technology that can be used for war and instability. One bug I want to put in the ear of your listeners is this: when we talk about this stuff, we're talking about the harms that come directly from an attack. The harms to an individual or the people they're in contact with when they get hacked, or even the chilling effect on democracy and civil society somewhere if all the journalists are being bugged by a greedy autocrat.
But the problem space is actually much larger, as I think some of this conversation has pointed out. If the US Government cannot ensure that its cyber weapons stay out of the hands of criminal groups, what's the likelihood that mercenary spyware players selling to governments that absolutely cannot get their act together, like Togo, for example, are going to prevent these very sophisticated zero-day vulnerabilities and other flaws from being used in a much more vigorous way by cybercriminal groups and others that may get their hands on them? To me, that's one of the biggest concerns, because we've been playing with fire on this problem since the beginning, and mark my words, it's only a matter of time before we see something really serious and bad happening here.
Ali Wyne: You mentioned that three-legged stool, and that we have two legs of it but need to work on the third. Obviously there's a lot of work to do, but I'm really grateful that the two of you are involved in that work. John Scott-Railton, Senior Researcher at the Citizen Lab at the University of Toronto's Munk School. David Agranovich, Director of Global Threat Disruption at Meta. Thanks so much for this really terrific conversation.
JSR: Thank you so much.
David Agranovich: Thank you, Ali.
Ali Wyne: That's it for this episode of Patching the System. Next time we'll wrap up this series with a look at the Cybercrime Treaty negotiations underway at the United Nations, and what they could mean for cyberspace globally. You can catch this podcast as a special drop in Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne, thanks very much for listening.
- Hacked by Pegasus spyware: The human rights lawyer trying to free a princess - GZERO Media ›
- Fooled by cyber criminals: The humanitarian CEO scammed by hackers - GZERO Media ›
- Attacked by ransomware: The hospital network brought to a standstill by cybercriminals - GZERO Media ›
- Podcast: How cyber diplomacy is protecting the world from online threats - GZERO Media ›
- Podcast: Foreign Influence, Cyberspace, and Geopolitics - GZERO Media ›
- Podcast: Cyber mercenaries and the global surveillance-for-hire market - GZERO Media ›
- The devastating impact of cyberattacks and how to protect against them - GZERO Media ›
- How rogue states use cyberattacks to undermine stability - GZERO Media ›
- Why snooping in your private life is big business - GZERO Media ›
How Russian cyberwarfare could impact Ukraine & NATO response
World leaders were on hand Friday for the start of the Munich Security Conference amid increasing tensions over Ukraine. In a Global Stage livestream conversation in Munich, moderator David Sanger of The New York Times discussed the Russian threat and the need to secure cyberspace with the former president of Estonia, Kersti Kaljulaid, NATO Deputy Secretary General Mircea Geoană, Benedikt Franke, chief executive officer of the Munich Security Conference, Anne-Marie Slaughter, CEO of New America, Ian Bremmer, president of Eurasia Group and GZERO Media, and Brad Smith, president and vice chair of Microsoft.
When world leaders gathered for the Munich Security Conference two years ago — against the backdrop of Brexit and President Donald Trump’s “America First” approach to foreign policy — there was a sense of a lack of cohesion, of “Westlessness” among the allies. This year couldn’t look more different.
With Russia saber-rattling over Ukraine and the threat of military escalation looming, the NATO alliance has been given a fresh lease of life with a reason to unify.
“This is a uniquely coherent and cohesive Munich Security Conference, because every NATO ally is completely convinced of the importance of the mission … of the shared values,“ said Bremmer as he described the tone among conference-goers.
Everyone on hand, of course, would prefer a diplomatic solution to the Russia-Ukraine conflict, but the tensions have offered a silver lining to the alliance. “We are rock solid … one of the good sides of this very unfortunate turn of events,” says NATO Deputy Sec. Gen. Mircea Geoană. “I’ve never seen a more diversified and intense consultation from our US allies with the rest of the alliance.”
Politicians, pundits and people on both sides of the Ukrainian border have been wondering the extent to which Russia might use conventional or unconventional means of force to manipulate Kyiv. Bremmer doesn’t think we should expect a “sudden blitzkrieg to Kyiv” so much as a slower assault/encroachment via “recognition of the breakaway territories of the Donbas.”
But clearly there is deep concern about spillover effects in the Baltic states. Kersti Kaljulaid, former president of Estonia, says her country is watching closely. Kyiv is “not fighting only for Ukraine, but for all of us,” she says. Kaljulaid believes the current crisis poses a threat to European security as a whole: “If we are too focused on Ukraine and whether it'll be a slice or a bigger slice, I think we are missing the big picture.”
Recent weeks have also seen an upswing in Russian cyberattacks, and many are wondering how far the Kremlin will go.
Cyber will be a part of the offensive, whatever the scale of escalation, says Geoană. “In all scenarios that Russian leadership would use against Ukraine, cyber is across the board. It’s a part of the non-kinetic operation, part of a destabilization operation, and it’s part of a huge disinformation campaign.”
Such threats are serious, and Geoană noted that the alliance has agreed that a massive cyber attack could trigger Article 5, which lies at the heart of NATO’s collective defense.
So what can NATO do to defend against such attacks? Beyond the defensive know-how it has developed in recent years, it's also working with member states such as the US, UK, Denmark, and Estonia to leverage their individual offensive capabilities to mitigate threats.
Beyond the Russia crisis, cyberwarfare and disinformation pose huge threats to democracies around the world, and greater understanding of those capabilities is sorely needed.
“[Disinformation] is the single biggest threat that Russia poses,” says Anne-Marie Slaughter, CEO of New America. And the information domain generally is full of conflict. It’s used to divide and conquer, with leaders like Putin telling lies so big that “people think there’s something there,” she says.
The US is doing a good job of controlling the counter-narrative in today’s Ukraine crisis, Slaughter adds, but “digital literacy and really training Americans and others to understand that information can be manipulated” is a must. Slaughter notes that such knowledge will be as important as any military strategy in the years ahead.
Kaljulaid agrees. She notes that while people increasingly want to live in free, democratic countries, “a partisan war has broken out precisely in the cyber domain” that is trying to break those democracies.
Beyond the current crisis with Russia, which everyone on hand at Munich hopes will be resolved through diplomatic means, “[cyber] is the real risk,” says Kaljulaid.
“Live from MSC 2022: Securing Cyberspace,” a Global Stage live conversation on cyber challenges facing governments, companies, and citizens, presented by GZERO Media and Microsoft, was recorded on February 18, 2022, in collaboration with the Munich Security Conference. Sign up for alerts about more upcoming GZERO events.
- To Russia, with love: Why has diplomacy failed? - GZERO Media ›
- Biggest cybersecurity threat to watch in 2022 - GZERO Media ›
- Would you pay a cyber ransom? - GZERO Media ›
- Hackers shut down US pipeline - GZERO Media ›
- "We're identifying new cyber threats and attacks every day" – Microsoft’s Brad Smith - GZERO Media ›
- Cyber warfare & disinformation play key role in Russia Ukraine conflict - GZERO Media ›
- Ukraine war: Has Putin overplayed his hand? - GZERO Media ›
- Global Stage ›
- Podcast: Cyber threats in Ukraine and beyond - GZERO Media ›
- Brad Smith: Russia's war in Ukraine started on Feb 23 in cyberspace - GZERO Media ›
- Podcast: Protecting the Internet of Things - GZERO Media ›
- Ukraine joining NATO "is the only option," says Alina Polyakova - GZERO Media ›
Michael Chertoff: Russia is not a long-term strategic rival for the US
Even as tensions build in Ukraine, Russia is not a long-term strategic rival for the United States. That’s according to former US Department of Homeland Security Secretary Michael Chertoff, who spoke to GZERO World last September. “The danger with Russia in the short-term is recklessness in the neighborhood,” he said. But even though Moscow may not be the same sort of adversary it was during the Cold War, Chertoff sees big challenges for Washington, especially in cybersecurity and hybrid warfare. “The real danger comes when the red lines are murky or fuzzy,” he added.
Watch all of Chertoff's interview on GZERO World with Ian Bremmer: Is America safer since 9/11?
Join us live from the 2022 Munich Security Conference
Friday, February 18 at 11 am ET / 5 pm CET: Watch GZERO Media and Microsoft's live conversation from the 2022 Munich Security Conference.
As crises converge, our speakers will discuss emerging risks at the intersection of technology, policy and security: NATO's role and tools to defend democracy, the US role in global alliances, the rise of cyber threats and the need for cyber norms and stronger defenses.
Participants:
- David E. Sanger, White House and national security correspondent, The New York Times (moderator)
- Ian Bremmer, President and Founder, Eurasia Group and GZERO Media
- Benedikt Franke, Chief Executive Officer, Munich Security Conference
- Mircea Geoană, Deputy Secretary General, NATO
- Kersti Kaljulaid, former President of Estonia
- Anne-Marie Slaughter, CEO, New America
- Brad Smith, President and Vice Chair, Microsoft
Event link: gzeromedia.com/globalstage
This event is being held in collaboration with the Munich Security Conference.
Live from MSC 2022: Securing Cyberspace | Friday, February 18, 2022, 11 am ET / 5 pm CET
Sign up to get email alerts about this and other GZERO events.
NFTs: Hype, mainstream growth - & implications
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
How wild is the NFT art world? And are there any loopholes behind the trend?
Well, to start with, for me the prices are insanely wild. It looks like a small circle of already wealthy fans enjoying this new type of speculation. And while I love art, I think there's a world of difference between the Bored Apes and Van Gogh, and I have not quite discovered any appealing cutting-edge creativity in the NFT space. Meanwhile, the loopholes are many: there is unauthorized use of images for NFTs, but also risks of money laundering and of inflating prices artificially. The whole hype reminds me a bit of Tulip Mania, when in the Netherlands between 1634 and 1637 bulbs were sold for as much as 10 times the annual salary of a skilled artisan.
How will NFTs reshape our ways of living?
Well, since there's so much money changing hands, it's understandable that people want in on it. We see social media platforms accommodating NFT sales and their use, for example Twitter offering the option of an NFT as an avatar, and venture capital investors are jumping on the bandwagon. Nike now looks to offer the sale of virtual shoes in the metaverse, and Bored Ape characters are expected to feature in the Super Bowl halftime show next week. So it looks like NFTs are going more mainstream, but there's also the question of regulation. They are currently not considered securities in the US, and I'm sure regulators will be catching up to avoid some of the harms from this unregulated market. So ultimately, regulation is also going to be very defining for the future of these investments.