Podcast: A cybercrime treaty proposed by…Russia?
Listen: Cybercrime is a rapidly growing threat, and one that will require a global effort to combat. But could some of the same measures taken to fight criminals online lead to human rights abuses and a curtailing of freedom?
As the United Nations debates a new and expansive cybercrime treaty first proposed by Russia, we’re examining the details of the plan, how feasible it would be to find consensus, and what potential dangers await if the treaty is misused by authoritarian governments.
Our participants for this fifth and final episode of “Patching the System” are:
- Amy Hogan-Burney, General Manager, Microsoft’s Digital Crimes Unit
- Ali Wyne, Eurasia Group Senior Analyst (Moderator)
This special podcast series from GZERO Media is produced in partnership with Microsoft as part of the award-winning Global Stage series. “Patching the System” highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Amy Hogan-Burney: It's clear from this conversation, clear from the work I do every single day, that there is a greater need for international cooperation, because as cyber crime escalates, it's clearly borderless and it clearly requires both the public sector and the private sector to work on the problem.
Ali Wyne: Welcome to Patching the System, a special podcast for the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group.
Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
And today in our final installment of this podcast, we will talk a little bit more about protecting businesses, governments, and citizens from cyber crime well into the future. Now, technology moves fast and certainly in the past two years of the pandemic, we've seen our reliance on it grow by leaps and bounds.
Unfortunately, though, cyber crime is also growing and evolving. And while there are treaties and agreements to support international cooperation in combating cyber crime, there are also ongoing negotiations over a new cyber crime treaty at the UN. Some governments claim that such a comprehensive treaty is necessary while others, as well as industry and civil society groups, raise concerns about how such a treaty might affect rights and freedoms we have come to expect online.
My guest today is Amy Hogan-Burney, general manager of Microsoft's Digital Crimes Unit. Amy, welcome.
Amy Hogan-Burney: Thank you. I'm pleased to be here today.
Ali Wyne: I'm just going to dive right in. So for those who aren't deeply working in this space, this cyber space, as it were, on a daily basis, as you do, why don't we just start with some basic definitions? So how do you define cyber criminals and how do you differentiate them from other kinds of bad actors on the internet? So maybe just beginning with some semantics and definitions for our audience.
Amy Hogan-Burney: Sure. It's a great question. Like really, what is cyber crime? I think the answer is generally that cyber crime is using a computer to illegally access or transmit or manipulate data. And it really is for financial gain, but it can also be for political advantage or for geopolitical reasons. In our work, we can see cyber crime affecting individuals. So for example, online child exploitation. Or we can see it affecting property, so theft of intellectual property or other confidential information from businesses. The thing that both of the examples that I just gave you have in common is a computer was used to commit the criminal activity. And it couldn't have been accomplished without an internet connected device. So I can, and I probably would use a computer to plan my bank robbery, but that bank robbery is not a cyber crime, I think is the best way I can describe it.
Amy Hogan-Burney: I just want to make sure everyone knows I'm not going to rob a bank though.
Ali Wyne: No, we'll take your word because you're busy fighting crime. Speaking of which, you've been fighting cyber crime for a long time. And actually your title, it sounds a bit like a network TV crime drama. I just want to remind folks who are listening: you're general manager of the Digital Crimes Unit at Microsoft. When you started, what did cyber crime look like? And now, how have you seen the space of cyber crime evolve?
Amy Hogan-Burney: So first, this question makes me feel old. It made me sound like I've been around for a long time, but it's okay.
Ali Wyne: You've been fighting for a long time. You've been fighting the good fight for a long time.
Amy Hogan-Burney: But I will say when I first started back about 10 years ago, I was at the FBI. And 10 years ago, cybercrime largely looked like denial of service attacks on banks. And the financial sector is really mature in fighting cybercrime, frankly, because they've had to be. Those attacks were really used to distract security teams so that criminals could steal personal data and banking credentials. And 10, 15 years ago, that really damaged the reputations of financial institutions. And they've worked incredibly hard at combating cyber crime for that exact reason. And while we still do see DDoS attacks in the financial sector, I would really say cybercrime at this point has evolved to be a threat to national security. And I say that for, I think, two reasons.
First is we're seeing cyber criminals attacking critical infrastructure, so healthcare, public health, information technology, financial services, the energy sector, things like that. And then we're seeing ransomware attacks that are increasingly successful. And those are actually crippling governments and businesses. And we also see the profits from this criminal activity really soaring, in business email compromise and ransomware and other things. And so we're seeing, I think, an increase in criminals who are able to get more money, and they're broadening their scope.
Ali Wyne: You already have sort of touched on in your answer just now the way in which cybercrime has grown in scale, it's become more sophisticated. It originally was targeting primarily financial institutions, but now it's really evolved. And it's almost, for lack of a better phrase, you could say that it's become professionalized. It really has become an industry in and of itself.
What's driving it? One explanation might just be, look, the Internet's growing, and hackers and coders, they're just getting more sophisticated as the internet grows. But is that explanation sufficient or do you think there's more that's going on to explain why we're seeing this surge of cybercrime?
Amy Hogan-Burney: Yeah, I think the answer is that years ago we used to see most of the technology was inside the United States. Most of the criminals and the sophisticated developers were located inside the US. And the actors were largely kind of working alone. They were a very small, tight-knit, technically savvy group. And we just don't see that anymore. So what we're really seeing at this point is cyber crime as a service, where you don't need these technically savvy people to commit criminal activity. You don't need to be a programmer. You don't need to be a developer. We have a cybercrime supply chain that is created by big criminal syndicates. And those criminal syndicates sell their services, allowing anyone to conduct this nefarious activity, whether it's for financial gain or for other nefarious purposes.
We're also seeing cyber criminals located around the world. And unfortunately, in many cases, they're operating in permissive jurisdictions. And we're seeing this malicious infrastructure located around the world. So we can see domains or servers located in more jurisdictions than we've ever seen before. So I have a case right now, which I'm happy to describe for you, but I have servers located in Brazil and Bulgaria and Bangladesh, and I just gave you the Bs because that's just what I happened to look at right before I-
Ali Wyne: I was just about to ask, I said, all three of those countries begin with B.
Amy Hogan-Burney: Those are the ones that begin with B. I actually pulled up the spreadsheet and I was like, oh, I'll just pick the B countries.
Ali Wyne: Oh, got it, got it, got it, got it. So you use this phrase, I mean, it's really evocative. It's actually the first time I've heard this phrase, cyber crime supply chain. When we think of supply chains, at least in my limited way of thinking, I think of supply chains that provision vital commodities. So supply chains for medicines, supply chains for technological inputs that go into our phones and our computers. But I generally have either a neutral association with the phrase supply chain or a favorable one. But to think about a cyber crime supply chain, that phrase that you used, it's incredibly evocative. And I think it gives us a sense of the way in which cyber crime has evolved.
I want to turn a little bit to the other side of the ledger. Presumably as cyber crime escalates, so do efforts to fight cyber crime. So the two sort of go hand in hand. And we'll talk a little bit more later about what governments can do, what they are doing. But in your work specifically at Microsoft, what are some of the ways in which you've adapted to the challenges presented by cyber crime? And realistically, what can Microsoft and other companies do to combat it?
Amy Hogan-Burney: Yeah, it's a great question. And I am incredibly fortunate to lead a team for which this is all we focus on: identifying cyber criminal actors. And then we refer those to law enforcement through criminal referrals. And at the same time, we also identify the actual technology that's used by cyber criminals. And then we seek to take that down. And we do that either in cooperation with other third-party providers, or through civil cases. And I think it's super helpful to maybe give an example here.
That ties back to my Brazil, Bulgaria, Bangladesh - which is, I think in October of 2020, we decided to take down TrickBot, which is a very large botnet. But we had a really specific reason for doing that. We were concerned and had heard from the US government that they were worried about any possible disruptions to the US election, which was in November of 2020. And they were concerned that there could be a potential ransomware attack, not on the actual technology used to cast ballots, but a ransomware attack on voter rolls or other things, something that could undermine confidence in the election, even if it didn't tamper with election results. And they really didn't want anything to undermine confidence in it for obvious reasons. And so TrickBot was one of the largest deliverers of ransomware. So we thought, okay, we will go after the delivery system to make sure that there's no ransomware in this case.
So we brought a civil case in October of 2020 to seize all of the infrastructure used by those cyber criminals. At the same time, the US government took their own action, both inside and outside of the US. And we really did, I think, a very, very good job of getting rid of that infrastructure in October of 2020. But this was a little different, I think, in a couple of ways, and we learned a lot back in October. And the first thing that we learned is that the criminals that ran TrickBot would sell TrickBot as a service. And so not only did we affect their infrastructure, we affected their business model. And they were really unhappy with us. And what they did was fight back. And one of the things that really surprised me is not only did they fight back, but they went after hospitals.
Ali Wyne: Goodness.
Amy Hogan-Burney: They did it during the pandemic. And they really were trying to prove that their service still worked by going after a pretty vulnerable population during a really sensitive time. And that was pretty surprising to us. We partnered with law enforcement and incident responders to really protect the healthcare system, but it was a big learning experience. And then the second thing is that they worked furiously to rebuild because, like I said, we had impacted their business model. And we are still taking down TrickBot infrastructure today. So we've kind of moved into this phase that I call advanced persistent disruption. And those servers that I told you I looked up today on our dashboard in Brazil and Bulgaria and Bangladesh, they are up and running for TrickBot today, for an operation I started back in October of 2020.
We will work with the providers to take those down. It usually takes about 24 hours. But it just shows how we're at a different place, where we have a supply chain that we're constantly combating now, versus years ago, when we used to be able to do an operation, take something down, and move on. These operations are really sophisticated and much larger in scope and scale.
Ali Wyne: So you've mentioned the three Bs - you looked at the dashboard this morning and you saw Brazil, Bulgaria, and Bangladesh. Just from those three Bs alone, it seems that when we talk about cyber crime, we're basically talking about an issue that defies borders. A single crime can take place across several jurisdictions at once. So when you're dealing with an inherently borderless challenge such as cyber crime, what are some of the tools and international instruments that are currently in place to support cooperation with both law enforcement and industry?
Amy Hogan-Burney: First, I think global cooperation is just essential in this space. So places where we have the private sector sharing information about cyber threats, where we can work together to seize that criminal infrastructure, because Microsoft is not a law enforcement agency. So while I can do all kinds of things to protect customers, to track threats, to provide notice, and assist victims, it really is essential that law enforcement is also able to seize infrastructure and to arrest the individuals behind this work. And this really wouldn't be possible, I don't think, without the Council of Europe's convention on cyber crime, the Budapest Convention, which is a longstanding, really valuable tool we have in this area. It's been in place over 20 years. It's been ratified by 66 governments across regions. And it really is a guideline for domestic cyber legislation.
I think the other part that's really important to us is that anytime we have international cooperation, we also have a focus on protecting human rights. And the Budapest Convention does that as well. The other part I would add, and I think we've touched on this throughout this conversation already, is just that evolution. Every time we see the internet evolve and products and services and other things grow and evolve, we unfortunately see criminals grow and evolve as well. And that means that we also need to see the legal framework and the conventions grow and evolve. And we see that with the Budapest Convention. So we have additional protocols that have been added. And I think you'll see more this spring, which will have greater support for international cooperation and more access to evidence, so that law enforcement can pursue those prosecutions of cyber criminals.
Ali Wyne: So coming to the present, you talked about the centrality of international cooperation. And so let's turn to the current cyber crime treaty negotiations that are taking place at the United Nations. So just from a bird's-eye view, the 30,000-foot level, what are those negotiations about? And what's the ultimate goal of this treaty that is being negotiated?
Amy Hogan-Burney: Yeah, it's a great question. It's just really hard for me to tell. On the one hand, it's clear from this conversation, clear from the work I do every single day, that there is a greater need for international cooperation, because as cyber crime escalates, it's clearly borderless and it clearly requires both the public sector and the private sector to work on the problem. Although I am just not certain that a new treaty will actually increase that cooperation. And I'm a little concerned that it might do more harm than good. One of the things that we're constantly thinking about is, yes, we want to be able to go after cyber criminals across jurisdictions.
But at the same time, we want to make sure that we're protecting fundamental freedoms, always respectful of privacy and other things. Also, we're always mindful of authoritarian states that may be using these negotiations to criminalize content or freedom of expression. So I get concerned about any treaty that looks like it may impact journalists or political dissidents or any other vulnerable group. And given that the Budapest Convention has been in place for 20 years, we certainly don't want to see anything that undermines or conflicts with the Budapest Convention.
Ali Wyne: Let's talk about the elephant in the room. Turning to the prospect of digital authoritarianism: Russia is obviously a major part of these negotiations that we've just been discussing. They not only sponsored the resolution that began these negotiations, but I think pretty surprisingly, they actually released a draft text for the treaty. Can you give us a little bit of insight into what their proposal entails?
Amy Hogan-Burney: Sure. First I will say, I think everyone was very surprised to see a draft text. Also it's 70 pages.
Ali Wyne: 70 pages? Wow.
Amy Hogan-Burney: Yeah, so I will spare everyone a complete legal analysis. And I will also say it takes quite a commitment to read.
Ali Wyne: Sure.
Amy Hogan-Burney: The draft, I think, is really focused on individual state interests versus kind of broad global cooperation. It's also very broad. Even at the very beginning of the draft, it starts by saying that "It's designed," and this is a quote, "to promote and strengthen measures aimed at effectively preventing and combating crime and other unlawful acts in the field of ICT," which is information and communications technology. Combating crimes and unlawful acts in the field of ICT is just so incredibly broad and very different than the definition that I gave you at the beginning of what cyber crime really is. And so that broad criminalization, I think, really brings up the risk to freedom of expression, privacy and other things.
And also, I think that vague language, that just because a computer is involved, it is part of this treaty, is also concerning. So it brings us back to my bank robbery example. My bank robbery, should I seek to commit it, certainly shouldn't be covered by a cyber crime treaty that's being negotiated at the UN. And so I think the first real big concern is making sure that we're very clear on definitions. And that we're really focused on cyber crime, such as very clear cyber-dependent offenses and those that are enabled by computers. And that definition, I think, really needs to be clarified in any draft.
Ali Wyne: And that's another important distinction that you posited: between cyber crime in the narrow, more helpful way that you specified at the outset of our conversation, versus cyber-enabled crimes or cyber-enabled nefarious activity. But I think positing that distinction is really important. And you mentioned that Russia has a draft text. The definition of cyber crime - or I shouldn't even say a definition; it's such an expansive conception - can subsume almost any kind of activity involving a computer. So it's so expansive as to be not only unhelpful, but, as you said, potentially to catch a lot of people in its net who really don't deserve to be caught up.
Ali Wyne: Let me turn, Amy, to another question. Continuing the discussion of this current cyber crime treaty that's being negotiated at the UN. What is the main argument for a new or more inclusive treaty, and how and why could it potentially fight cyber crime more effectively?
Amy Hogan-Burney: So I think anytime I see conversations being had about well-scoped or structured international legal instruments to combat cyber crime, I have to be supportive of that. Because if we can get to a place where there is widely adopted consensus across governments that would allow for common legal frameworks and obligations across jurisdictions, that would allow for effective cooperation, and that would provide clear and repeatable access to data, this will be helpful in allowing governments to hold cyber criminals accountable. I think this will be helpful for the private sector and the public sector to work together to take down that malicious infrastructure. As long as we are making sure that we have a human-centric approach to this, which means we really need to make sure that there is a right to redress for individuals in case any rights are violated, and that it doesn't expand authorities in a way that allows law enforcement to trample those fundamental rights that we discussed before.
Ali Wyne: Is the biggest challenge in combating cyber crime just the lack of a common framework? How much of an impact would a new treaty have? And let's say a new treaty that would be up to your standards, a new treaty that you would advocate. Let's say that it were to be endorsed and let's say that it were to come into effect. What are some of the outstanding challenges that even a good treaty that lives up to your standards might not be able to solve as cyber crime continues to evolve?
Amy Hogan-Burney: Yeah, I wish a new treaty would solve all my problems. That would be great. I mean, someday maybe a new treaty will work me out of a job, but I don't think so. So I don't think even a perfect treaty will be the silver bullet. But I think one of the big things we see for many states is that there really needs to be capacity building. So we've talked a lot about the technology that's involved in this, how sophisticated these criminal infrastructures are, how sophisticated the criminal groups are. And so I think more really needs to be done to support law enforcement agencies to up their technological capability, to better preserve and collect information, to share evidence, to perform digital forensics, so that they can work with others. And so a treaty is great, a legal framework is great, but we need to have people on the ground who are able to implement that. And that, I think, is one of the most important things.
Ali Wyne: You've given us a sense of what a good treaty might entail, obviously, recognizing that it's not a silver bullet. But let's push on that a little bit more and let's imagine sort of two scenarios. So in scenario one, a treaty isn't reached. So then if a treaty isn't reached, what are some of the possible outcomes? And then scenario two, let's imagine that a treaty is reached, but it doesn't really reflect the traditional understandings of cyber crime. So maybe walk us through those two scenarios and what you think the implications would be for cyber crime in each of those two scenarios.
Amy Hogan-Burney: So if a treaty isn't reached, I don't think that this exercise is all for naught. So I think the first thing is that having the conversations about international cooperation, the conversations about definitional issues about cyber crime, about protecting fundamental rights in this space, is really important. And I also think it does raise the issue of the permissive environment that we sometimes see here, where we have nations that are allowing this illegal activity to be conducted in their jurisdictions; it's an unfortunate reality. But by raising this at the international level and in the UN, it does force us to have this conversation. So even if a treaty isn't reached, we have still had that international cooperative conversation.
If a treaty is reached, I think a lot just depends on how many states adopt the treaty and how it is enforced. I would imagine that there could be a change in the level of privacy and freedom of expression for individuals in the countries that put this agreement into force. And I also have concerns, frankly, that it could threaten the vision of a global public internet if there are obligations that are inconsistent with our current international framework.
Ali Wyne: Sure. So let's leave the audience with a little bit of hope. What is the ideal or potential new digital world that could be created by a cyber crime treaty that lives up to your standards? Maybe leave us with a little bit of cautious optimism about some of the possibilities of not a brave new digital world, but a better digital world.
Amy Hogan-Burney: Yeah. I always do worry about being such a Debbie Downer when I do these, because I just talk constantly about the threat. So I do like to try to leave with hope. And I think there's hope, first, in that we're seeing more victims come forward and more transparency and more conversation here. And if we were to see a new cyber crime treaty that enables law enforcement cooperation across jurisdictions, that improves access to data while protecting fundamental rights, I just think it could go such a long way towards aligning our efforts internationally, across governments and across the private sector. And it could go a long way towards protecting those victims that I mentioned, who we're really starting to see humanize the cyber crime that is out there.
Ali Wyne: Amy Hogan-Burney, general manager of Microsoft's Digital Crimes Unit. Amy, thank you so much for being with us today. Thank you so much for sharing your insights. It's been a real pleasure.
Amy Hogan-Burney: Thank you for having me.
Ali Wyne: And before we go, let's check in once more with Annalaura Gallo, head of the secretariat of the Tech Accord, about what the Tech Accord and its industry partners hope to see from a cyber crime treaty.
Annalaura Gallo: Last year, before the UN negotiations on the new cyber crime convention started, the Tech Accord, together with the CyberPeace Institute and over 60 organizations, launched a manifesto on cyber crime, highlighting the principles for a cyber crime convention that safeguards human rights online and a free, open and secure internet. The manifesto includes a set of principles that the states conducting the negotiations should keep in mind to ensure that the treaty preserves human rights, but also a free and open internet. First of all, the new cyber crime treaty should protect targets and victims of cyber crime. We think this is a very important point to take into account. It should also ensure that there is effective international cooperation across sectors and between the public and the private sector, but also maintain existing international legal obligations. A new cyber crime treaty should not be an avenue for states to reduce their existing obligations. And of course the manifesto also has a focus on the importance of a multi-stakeholder approach to the negotiations, ensuring that non-governmental stakeholders are included in the process. And we were happy to see that there has definitely been an inclusive approach so far.
Ali Wyne: That's it for this, the final episode of the Patching the System series. Thank you for joining us, and be sure to listen to all of our episodes on topics including cyber mercenaries, supply chain attacks, hybrid warfare, and the internet of things, for deep-dive discussions with industry experts from the Cybersecurity Tech Accord on the most pressing challenges online. All episodes are available in Ian Bremmer's GZERO World feed anywhere you get your podcasts. And for more information on the Cybersecurity Tech Accord and its ongoing efforts to give a voice to the industry on matters of peace and security online, you can check out their website at cybertechaccord.org. I'm Ali Wyne, thanks very much for listening.
Podcast: Cyber Mercenaries and the digital “wild west”
Listen: The concept of mercenaries, hired soldiers and specialists working privately to fight a nation’s battles, is nearly as old as war itself.
In our fourth episode of “Patching the System,” we’re discussing the threat cyber mercenaries pose to individuals, governments, and the private sector. We’ll examine how spyware used to track criminal and terrorist activity around the world has been abused by bad actors in cyberspace who are hacking and spying on activists, journalists, and even government officials. And we’ll talk about what’s being done to stop it.
Our participants are:
- John Scott-Railton, Senior Researcher at the Citizen Lab at the University of Toronto's Munk School
- David Agranovich, Director of Global Threat Disruption at Meta.
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
John Scott-Railton: You go to a growing number of mercenary spyware companies and surveillance companies that basically offer you NSA-style capabilities in a box and say, "Look, you can pay us a certain amount of money and we're going to send you this stuff." You're seeing basically the direct proliferation, not only of those capabilities, but actually national security information about how to do this kind of hacking moving its way right into the private sector.
David Agranovich: They fill a niche in the market, nation states that lack surveillance capabilities themselves, threat actors who want deniability in their surveillance activities and clients like law firms or litigants who want an edge on their competition. In reality, the industry is putting a thin veneer of professionalism over the same type of abusive activity that we would see from other malicious hacking groups.
INTERVIEW
Ali Wyne: Welcome to Patching the System, a special podcast for the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a Senior Analyst at Eurasia Group.
Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. And today we're talking about mercenaries, and the concept is almost as old as warfare itself. Hired guns, professional soldiers used in armed conflict. From Germans employed by the Romans in the fourth century to the Routiers of the Middle Ages, to modern day security firms whose fighters have been used in the Iraq and Afghanistan wars, as well as the current war in Ukraine.
But our conversation today is about cyber mercenaries. Now these are financially motivated private actors working in the online world to hack, to attack and to spy on behalf of governments. And in today's world, where warfare is increasingly waged in the digital realm, nations use all the tools at their disposal to monitor criminal and terrorist activity online.
Now that includes spyware tools such as Pegasus, a software made by the Israel-based cyber security firm NSO Group that is designed to gain access to smartphones surreptitiously in order to spy on targets. But that same software, which government organizations around the world have used to track terrorists and criminals, has also been used to spy on activists, journalists, even officials with the U.S. State Department.
Here to talk more about the growing world of cyber mercenaries and the tech tools they use and abuse are two top experts in the field, John Scott-Railton or JSR, he's a Senior Researcher at the Citizen Lab at the University of Toronto's Munk School and David Agranovich, who now brings his years of experience in the policy space to his role as Director of Global Threat Disruption at Meta. Welcome to both of you.
JSR: Good to be here.
David Agranovich: Thanks for having us.
Ali Wyne: JSR, I'm going to start with you. So I mentioned in my introductory remarks, this Pegasus software. So tell us a little bit more about that software produced by the NSO Group and how it illustrates the challenges that we're here to talk about today?
JSR: So you can think of Pegasus as something like a service, governments around the world have a strong appetite to gain access to people's devices and to know what they're typing in and chatting about in encrypted ways. And Pegasus is a service to do it. It's a technology for infecting phones remotely, increasingly with zero-click vulnerabilities. That means accessing the phones without any deception required, nobody needs to be tricked into clicking a link or opening an attachment. And then to turn the phone into a virtual spy in the person's pocket. Once a device is infected with Pegasus, it can do everything that the user can do and some things that the user can't. So it can siphon off chats, pictures, contact lists but also remotely enable the microphone and the video camera to turn the phone into a bug in a room, for example. And it can do something else, which is it can take the credentials the user and the victim use to access their cloud accounts and siphon those away too and use those even after the infection is long gone to maintain access to people's clouds.
So you can think of it as a devastating and total access to a person's digital world. NSO, of course, is just one of the many companies that makes this kind of spyware. We've heard a lot about them, in part, because there's just an absolute mountain of abuse cases. Some of them discovered by myself and my colleagues around the world, with governments acquiring this technology, perhaps under the rubric of doing anti-terror or criminal investigation, but of course they wind up conducting political espionage, monitoring journalists and others.
Ali Wyne: David, let me come to you. So I think that we should just, before we dive into the deeper conversation, getting a little bit into semantics, a little bit into nomenclature. But let's just start with some basic definitions. When most folks hear the phrase cyber mercenary, some of them might just think it's any kind of bad actor, hacker, others of them might draw parallels to real life, analog kind of mercenaries, so sort of hired soldiers in war. So how do you define the phrase cyber mercenary? How does Meta define the term cyber mercenary and why?
David Agranovich: So maybe just to ground ourselves in definitions a bit. My team at Meta works to coordinate disruption and deterrence of a whole ecosystem of adversarial threat actors online. And so that can include things like info ops, efforts to manipulate and corrupt public debate through fake personas. It can include cyber espionage activity, which is similar to what we're talking about today: efforts to hack people's phones, email addresses, devices, and scaled spamming abuse. When we're talking about cyber mercenary groups, I think of that within the broader cyber espionage space. These are people who are engaged in, as JSR talked about, surveillance, efforts to try and collect info on people, to hack their devices, to gain access to private information across the broader internet. These are private companies who are offering surveillance capabilities, which were once essentially the exclusive remit of nation state intelligence services, to any paying client.
The global surveillance-for-hire industry, for example, targets people across the internet to collect intelligence, to try and manipulate them into revealing information about themselves, and ultimately to try and compromise their devices, their accounts, steal their data. They'll often claim that their services and the surveillance ware that they build are intended to focus on criminals, on terrorists. But what our teams have found, and what groups doing incredible work like Citizen Lab have found, is that they're regularly targeting journalists, dissidents, critics of authoritarian regimes, the family of opposition figures and human rights activists around the world.
These companies are part of a sprawling industry that provides these intrusive tools and surveillance services, indiscriminately to any customer, regardless of who they're targeting or the human rights abuses that they might enable.
Ali Wyne: What strikes me just in listening to your response is not only how vast, how sprawling this industry is, but also how quickly it seems to have risen up, just comparing the state of this industry today versus even 10 years ago or even five years ago. How did it rise up? What are some of the forces that are propelling its growth? And give us a sense of its origin story and what the current state of play of this industry is today.
David Agranovich: As we see it, these firms grew out of essentially two principal factors: the first, impunity; and the second, a demand for sophisticated surveillance capabilities from less sophisticated actors.
On the first point, companies like NSO or Black Cube or those that we cited in our investigative report from December last year, they wouldn't be able to flagrantly violate the privacy of innocent people if they faced real scrutiny and costs for their actions. But also, to that second point, they fill a niche in the market, nation states that lack surveillance capabilities themselves, threat actors who want deniability in their surveillance activities and clients like law firms or litigants who want an edge on their competition. In reality, the industry is putting a thin veneer of professionalism over the same type of abusive activity that we would see from other malicious hacking groups.
Ali Wyne: So JSR, so I want to come to you now. So David has kind of given us this origin story and he has given us a state of play and has really given us a sense of how sprawling this industry is. So, I guess, for lack of a better phrase, there are jobs here, there are jobs in this space. Who's hiring these cyber mercenaries and for what purposes? Who are they targeting?
JSR: There are a lot of jobs. And I think what's interesting, David pointed out the problem about accountability. And I think that's exactly right. Right now, you have an ecosystem that is largely defined only by what people will pay for, which is a seemingly endless problem set. So who's paying? Well, you have a lot of governments that are looking for this kind of capability that can't develop it endogenously and so go onto the market and look for it. I think even after the Snowden revelations, a lot of governments were like, "Man, I wish I had that stuff. How do we get that?" And the answer is increasingly simple. You go to a growing number of mercenary spyware companies and surveillance companies that basically offer you NSA-style capabilities in a box and say, "Look, you can pay us a certain amount of money and we're going to send you this stuff."
And as David points out, a lot of it is done under the sort of rhetorical flag of convenience of saying, "Well, this is stuff for tracking terrorists and criminals." But actually at this point, we probably have more evidence of abuses than we do confirmed cases where this stuff has been used against criminals. Who's doing the work? A lot of the people who go into this industry are hired by companies with names like NSO, Candiru and others. Many of them come out of government, they come out of either doing their military service in a place like Israel in a unit that focuses on cyber warfare or they come out of places like the CIA, the NSA, Five Eyes and other countries' intelligence services.
Which in itself is really concerning, because you're seeing basically the direct proliferation, not only of those capabilities, but actually national security information about how to do this kind of hacking, moving its way right into the private sector. And we've seen some really interesting cases in the last year of people who came out of the US intelligence community, for example, doing exactly this kind of thing and then pretty recently getting indicted for it. And so my hope is that we're beginning to see a bit of accountability around this, but it's a really concerning problem set, in part because the knowledge is specialized, a lot of it relates to countries' national security, and it's now flowing into a big, sprawling, unregulated marketplace.
Ali Wyne: So David, let's build on what JSR just said. So we have this big, sprawling, it seems increasingly unregulated surveillance ecosystem. It's more democratized, there are more individuals who can participate, the surveillance is getting more sophisticated. So I want to go back to your day job. Honestly, you have a big purview, you head up Global Threat Disruption at Meta, which is responsible for a very wide range of platforms. Which groups do you see, in your professional capacity at Meta, as being most vulnerable to the actions of cyber mercenaries?
David Agranovich: So I think what's remarkable about these types of cyber mercenary groups, as JSR has noted, is just how indiscriminate their targeting is across the internet and how diverse that targeting is across multiple different internet platforms. When we released our research report into seven cyber mercenary entities last year, we found that the targets of those networks ranged from journalists and opposition politicians to litigants in lawsuits to democracy activists. That targeting wasn't confined to our platforms either. One of the most concerning trends that we saw across these networks, and which Citizen Lab has done a significant amount of investigative reporting into, is the use of these types of technologies to target journalists, often in countries where press freedoms are at risk, and the use of these types of technologies not just to try and collect open source information about someone, but really to try to break into their private information, to hack their devices.
Some of the capabilities that JSR mentioned about the Pegasus malware for example, are incredibly privacy intrusive. Ultimately the problem that I see here is these firms effectively obscure the identity of their clients. Which means anybody, authoritarian regimes, corrupt officials, any client willing to pay the money, can ostensibly turn these types of powerful surveillance tools on anyone that they dislike. And so to answer your question, who's most vulnerable? The reality is that anyone can be, it's why we have to take the activities of these types of firms so seriously.
Ali Wyne: So you both have given us a sense of, again, this really sprawling surveillance ecosystem, the growing range of targets, the growing democratization of this kind of nefarious activity. Can you give us a sense of what tactics you've seen lately that are new? I mean, when I think back to some of the earlier conversations we've had in this podcast series, some of the guests we've had have said, look, there are basic precautionary measures that all of us can take, whether we are a technology expert, such as yourselves or whether we're just a lay consumer.
So use different passwords for different platforms, taking basic steps to safeguard our information. But obviously I think that the pace at which individuals can adapt and the pace at which individuals can take preventative measures, I think is invariably going to be outstripped by the speed with which actors can adapt and find new ways of engaging in cyber mercenary activities. So in your time at Meta, have you seen new tactics being used by these groups in recent years and how are you tracking those and identifying them?
David Agranovich: So maybe just to ground our understanding of how these operations work.
Ali Wyne: Sure.
David Agranovich: How do these tactics fit across the taxonomy? We break these operations down into three phases, what we call The Surveillance Chain. The first phase, called reconnaissance, is essentially an effort by a threat actor to build a profile on their target through open source information. The second phase, which we call engagement, is where that threat actor starts to try and build a rapport with the target, with the goal of social engineering them into the final phase, which is exploitation. That final step, which most often happens off of our platform, is where the target receives malware or a spearphishing link in an attempt to steal their account data.
Generally, the way we see the tactics throughout these three phases play out is we'll see these operations use social media early in their targeting to collect information to build a profile in the reconnaissance phase or to try and engage with a target and build a rapport in the engagement phases. And then they'll attempt to divert their target to other platforms like malware riddled websites, for example, where they might try to get a target to download a Trojanized chat application that then delivers malware onto their device or other social media platforms where they'll try and exploit them directly.
David Agranovich: I think the most consistent trend we see with these types of operations is adversarial adaptation. What that means is when we do these takedowns and when our teams publish reports on the tactics we're seeing, or when open source investigative organizations or civil society groups find these types of networks themselves and disclose what they're doing, these firms adapt quickly to try and get around our detection. It ultimately makes it really important, one, to keep investigating and holding these firms accountable. And two, to essentially follow these threats wherever they may go, and tackle this threat as a whole-of-society problem. That's going to require a more comprehensive response if we want to see these types of tools used in a responsible way. But those are, I think, some of the trends we've seen more broadly.
JSR: Mm-hmm (affirmative).
Ali Wyne: And JSR, let me come to you, just in responding to David. So in your own work at Citizen Lab, what kinds of trends are you observing in terms of either targets and/or tactics?
JSR: Well, the scariest trend, and I think we're seeing it more or less wherever we scratch, is zero-click attacks. So it used to be, you could tell people and be Buddhist about it, "Look, detach from attachments. Be mindful of links that can bite." There's a way to do that and in fact, I'm not just pulling that out from nowhere. We worked many years ago with a group of Tibetans who were looking for a campaign of awareness raising to reduce the threat from Chinese threat actors. And so we used this very Buddhist concept of detaching from attachments, stop sending email attachments to each other. Which resulted in a real drop in the efficacy of these Chinese hacking groups as they were trying to find new ways to get people to click on malware. It took a while.
Ali Wyne: Got it.
JSR: But ultimately, per David, we saw adaptation. In general, I think the problem is twofold. One, human behavior is fraught with what we call forever day vulnerabilities, you can't patch. People are vulnerable to certain kinds of things, certain kinds of deception. And so we need to look at platforms and technologies to do part of that work of protecting people and to try to prevent attacks before they reach the level of a victim, having a long, drawn out conversation with somebody. The other thing, of course, that's really concerning, NSO and many others at this point are selling their customers ways to infect devices, whether it's laptops or phones that don't require any user interaction. And obviously this is pretty bad because there's nothing you can do about it as a user, you can keep your device updated but you'll still potentially be susceptible to infections. So you can't really tell people, "Look, here are the three things and if you just do them right, you'll be fine."
The second problem set that it creates is that it makes it a lot harder for investigators like us to find traces of infection quickly. It used to be the case a couple years ago even, that when I would run a big investigation to find cases of, say, NSO targeting, the primary process of investigation would involve finding text messages, finding those infection messages. Even if the forensic traces of the infection were long gone, we could find those. But now we have to do forensics, which means that for defenders and researchers and investigators like us, it creates a much bigger lift in order to get to a place where we understand what's going on with an attack. And that to me is really concerning. People in the government side talk about concerns around encryption causing criminals to go dark. My biggest concern is hacking groups going dark because it's a lot harder to spot when the infections happen. Of course, the harm remains and that's really what we're talking about.
Ali Wyne: I suspect that this will be a phrase that will be new to a lot of listeners or fellow listeners such as myself but when you said, "Detachment from attachment," and I said, "It's such a nice turn of phrase," and I didn't actually realize until you related this anecdote, I didn't realize that it was actually grounded in a professional experience that you had.
JSR: Yeah.
Ali Wyne: But I think it's a compelling mantra for all of us, "Detachment from attachment." I do want to be fair and I want to make sure that we're giving listeners a full picture. And so David, let me come back to you. One question I imagine some listeners will have is whether, in theory, cyber mercenaries could be used for good. Are there some favorable, or at a minimum at least some legitimate, ways that cyber mercenaries can and/or should be employed? I mean, are there places where they're operating legally? Are there places where they're doing good work? So maybe give us a little bit of a perspective on the other side of the ledger?
David Agranovich: So I'll certainly try but I should preface this by saying, most of my career before I joined Meta was in the National Security space.
Ali Wyne: Right.
David Agranovich: And so I take the security threats that I think some of these firms talk about very seriously. The reality is that law enforcement organizations and governments around the world engage in some of this type of surveillance activity. But what's important is that they do that subject to lawful oversight. And with limitations on their legal authorities, at least in democratic systems. What makes this industry so pernicious and so complicated is, at least as far as we can tell, there's no scalable way to discern the purpose or the legitimacy of their targeting. What's more, the use of these third-party services obfuscates who each end customer might be, what they are collecting and how the information is being used against potentially vulnerable groups.
There's essentially just a fundamental lack of accountability or oversight in the surveillance-for-hire industry that makes it hard to determine whether any of this targeting could be considered legitimate. If we wanted to develop a whole-of-society approach to the surveillance-for-hire space and answer your question, we would need to, one, create the oversight and accountability that surveillance tools should receive. Two, hold these companies accountable for how their tools are used or misused. And three, align through the democratic process on how much these firms should be allowed to do. Until we answer those questions, the surveillance industry will be ripe for abuse.
JSR: So one of the interesting things I like to think about is people think that the problem with the mercenary spyware industry is that it sells to autocrats and authoritarians. And of course, it's true. That is part of the problem with the industry, because you can guarantee that autocrats and authoritarians are probably going to use this technology in bad ways, in ways that are anti-democratic and problematic. But we now have a couple of years' experience looking at what happens when big, sprawling democracies, from Mexico to India to Poland, get their hands on Pegasus. And what we see is abuses there too.
And so I like to think of the problem set as actually being one where there are very few customers that you could sell this kind of technology to, this really sophisticated surveillance capability, that wouldn't be likely to abuse it. And to me, you have to situate this within the broader problem set, which is that authoritarianism is resurgent around the world. And unfortunately, this technology has come at a time when lots of authoritarians and want-to-be authoritarians are looking for technological ways to get into the heads and phones of their subjects and people around the world. And it's just a very unfortunate thing that these two things are happening at the same time. But I think we can look around the world and say the mercenary industry is absolutely increasing the speed of authoritarianism in certain country contexts, including in certain democracies that are sliding towards authoritarianism. Hungary would be an example, El Salvador is another - both had big Pegasus scandals, both on paper are democratic, but both are really moving in a concerning direction.
Ali Wyne: I think that context you provided, that geopolitical context, is a really helpful backdrop for, or an overlay on, our broader conversation. Up until now, we've been talking about trends in the digital space, and I think you're bringing in this geopolitical element, and you put the two together and there's a real prospect of not only resurgent authoritarianism but resurgent authoritarianism imbued with ever more sophisticated technology. So I think you've given us a sense of that digital-geopolitical nexus and really of the scale of the problem. Given the scale of this problem, JSR, as you've outlined it, I want to get you both to react to a snippet of a conversation I recently had with Annalaura Gallo. She's the Head of the Secretariat of the Cybersecurity Tech Accord. And here's what she had to say about cyber mercenaries.
Annalaura Gallo: So the issue here is that we have a private industry that is often legal, that is focused on building very sophisticated, offensive cyber capabilities, because these are sometimes even more sophisticated than what states can develop. And then they're sold to governments but also other customers. And essentially they're made to exploit peaceful technology products. We know they've also been used by authoritarian governments for surveillance and to crack down on political opposition in particular. And we think that all this is extremely concerning because, first of all, we are witnessing a growing market. There is a proliferation of these cyber capabilities that could finally end up in the wrong hands. So not only governments, but also malicious actors that use these tools to then conduct larger scale cyber attacks. So we don't see how we can just continue in a framework where there is no regulation of these actors, because this would put not only human lives at risk, but also put at risk the entire internet ecosystem.
Ali Wyne: So David, let me come to you. If this nexus of issues is so large, who needs to begin to take responsibility and how? You speak as a representative from a major industry player, Meta. What can the private sector in particular do to mitigate the impact of cyber mercenaries? And maybe if you could just give us a sense of some general industry principles that you'd recommend.
David Agranovich: There's a responsibility, I think, spread across governments, tech companies, civil society and the surveillance industry itself. Governments have the most power to meaningfully constrain the use of these tools. They can hold abusive firms accountable and they can protect the rights of the victims that these firms target. This industry has thrived in a legal gray zone. So the lack of oversight, the lack of regulation has enabled them to grow and appropriate oversight and regulation would go pretty far in curbing some of the worst abuses. Tech companies like ours also need to continue doing what we can to help protect our users from being targeted and to provide people with the tools to strengthen their account security. We need to make it harder for surveillance companies that are part of this industry to find people on our platform and to try and compromise their devices or their accounts.
We routinely investigate these firms. And when we do, we take steps to curb their use of fake accounts, and we work to reverse engineer their malware. Then we share threat indicators, or indicators of compromise, with other industry players and with the public. We're also working to notify the victims when we see them being targeted, because that helps them take steps to mitigate the risk. These operations are so often cross-platform: they might leverage applications, social media websites, or websites controlled by the attacker. If we see someone being targeted on one of our platforms, we send them a notification that we think they are being targeted and give them specific steps to follow to lock down their cybersecurity presence. Hopefully that doesn't just protect them from being targeted on our platform; it also might cut off avenues of attack if a surveillance company is trying to get at them another way.
Third, civil society also has an important role to play, in particular in determining what the norms in this space should be. What's acceptable? What's going too far? And how to start creating those expectations more broadly. And then finally, as I mentioned, the surveillance industry has responsibilities here. You see these firms claim, as JSR has noted, that they're just in the business of targeting terrorists and criminals. That's just not what our investigations find.
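David's point about sharing threat indicators, or indicators of compromise, across the industry can be made concrete with a small sketch. The record format below is purely illustrative: the field names and sample values are hypothetical and do not represent Meta's actual schema or a formal standard such as STIX.

```python
# A minimal, hypothetical shape for a shared indicator of compromise (IOC).
# Field names and sample values are illustrative, not any platform's real schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class ThreatIndicator:
    indicator_type: str  # e.g. "domain", "ip", "file_hash"
    value: str           # the observable itself
    campaign: str        # label for the surveillance operation it was seen in
    confidence: str      # "low" | "medium" | "high"
    first_seen: str      # ISO 8601 date

def to_shareable_json(indicators):
    """Serialize indicators so other platforms or researchers can ingest them."""
    return json.dumps([asdict(i) for i in indicators], indent=2)

if __name__ == "__main__":
    iocs = [
        ThreatIndicator("domain", "update-check.example", "hypothetical-op-1", "high", "2022-01-15"),
        ThreatIndicator("file_hash", "d41d8cd98f00b204e9800998ecf8427e", "hypothetical-op-1", "medium", "2022-01-16"),
    ]
    print(to_shareable_json(iocs))
```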
JSR: I agree with David. I think you have to have consequences and accountability, and we are getting there. One of the most interesting things that happened in this space in the last couple of years was the Commerce Department choosing to list NSO. Now this, of course, limits the ability of American companies to do business with NSO Group. But it had an immediate and radical signaling effect on investors in NSO, and the value of NSO's debt plummeted. I think what's interesting about that is that it shows that the industry and the people who are interested in investing in it kind of know how far offside they are from basic norms and ethics and risks. And the issue is just that for too long there haven't been consequences.
To put this into a bit of a historical perspective. We've been reporting on the mercenary spyware industry for a decade. Things really started changing only in 2019 when WhatsApp and Meta chose to sue NSO Group. That was the beginning of a different phase. Up until that point, NSO had been like the bully on the playground and civil society groups and people working with victims were like the bullied kids. NSO was just a bigger company, more powerful, pouring millions into PR and lobbying.
Suddenly things got a little more complicated for NSO. And then in the last two years, we've seen not only a string of scandals around NSO coming from a place of investigations and research, but also Apple and others joining legal actions against NSO. And then signals from the US Government, both around NSO specifically and more generally towards the mercenary spyware industry. So I think we have a model for what's needed. It looks like legal consequences and accountability for abuses. It looks like serious leaning in by players like Meta, Apple and others using all the tools available, not just technical control measures. It also looks like making sure that governments do their bit and they protect their own citizens and they also make sure that companies that are really the worst offenders, fueling proliferation, are not able to make a big success at it.
And I think we're still learning how some of these things play out, but it's been essential to have big platforms leaning in. I see it a little bit like a stool: you have civil society, you have government, and you have the private sector. And we have two legs now, private sector and civil society, and that third leg, I think, is coming. I'm very excited, for example, that the European Union is on the cusp of opening up a Committee of Inquiry into Pegasus and the mercenary spyware industry more generally; it has a pretty broad mandate. And I just hope to continue to see more governments taking action.
I think when we see that happen, we're also going to see a real shift in the norms of the debate. Because the problem here is not just the tech, it's really the proliferation of that tech. And you solve that problem in the same way that you would solve the proliferation of other kinds of technology that can be used for war and instability. One bug I want to put in the ear of your listeners is this: when we talk about this stuff, we're usually talking about the harms that come directly from an attack. The harms to an individual, or to the people they're in contact with, when they get hacked, or even the chilling effect on democracy and civil society somewhere if all the journalists are being bugged by a greedy autocrat.
But the problem space is actually much larger, as I think some of this conversation has pointed out. If the US Government cannot ensure that its cyber weapons stay out of the hands of criminal groups, what's the likelihood that mercenary spyware players selling to governments that absolutely cannot get their act together, like Togo, for example, are going to prevent these very sophisticated zero-day vulnerabilities and other flaws from being used in a much more vigorous way by cybercriminal groups and others that may get their hands on them? To me, that's one of the biggest concerns, because we've been playing with fire on this problem since the beginning and, mark my words, it's only a matter of time before we see something really serious and bad happen here.
Ali Wyne: You mentioned that three-legged stool, and you mentioned that we have two legs of that stool but need to work on the third. Obviously there's a lot of work to do, but I'm really grateful that the two of you are involved in that work. John Scott-Railton, Senior Researcher at the Citizen Lab at the University of Toronto's Munk School. David Agranovich, Director of Global Threat Disruption at Meta. Thanks so much for this really terrific conversation.
JSR: Thank you so much.
David Agranovich: Thank you, Ali.
Ali Wyne: That's it for this episode of Patching the System. Next time we'll wrap up this series with a look at the Cybercrime Treaty negotiations underway at the United Nations, and what they could mean for cyberspace globally. You can catch this podcast as a special drop in Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne, thanks very much for listening.
Podcast: Protecting the Internet of Things
In the second episode of Patching the System, a GZERO podcast produced as part of the Global Stage partnership with Microsoft, we’re examining the proliferation of “smart” devices and the potential risks they pose. We’ll also hear what the Cybersecurity Tech Accord is doing about this important issue.
Our participants are:
- Vince Jesaitis, Senior Director of Government Affairs at Arm
- Beau Woods, Cyber Safety Advocate for I Am the Cavalry
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
Podcast: Protecting the Internet of Things
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Beau Woods: So for a long time there was a concept that people said, "IoT devices, no one would ever hack an IoT device. That's ridiculous. There's no money in it," right? But now we've seen that adversaries can gain things from IoT devices.
Vince Jesaitis: Government plays an incredibly important role in all of this. The market is certainly trending towards more secure IoT devices, but governments and action from governments can really accelerate that.
Ali Wyne: Welcome to Patching the System, a special podcast for the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout the series we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
Today we're talking about the Internet of Things, the billions of devices around the world that are network-connected, collecting and sharing data: smart watches, smart refrigerators, even light bulbs you can turn on using your phone. Smart device manufacturing is a big business. It could top half a trillion dollars globally by 2028.
Ali Wyne: But this new world of products also brings new risks both for privacy and security, that are rarely considered as you switch to these so-called smart devices. And some consumers have already felt the real impacts of this, experiencing vulnerabilities, and even attacks, on Internet-of-Things devices, from smart home security cameras to smart coffee machines, to implantable heart devices.
And given these new risks, important discussions need to be had about what responsibilities individuals and organizations have to protect us against these vulnerabilities. I recently spoke to Annalaura Gallo, the head of the Secretariat of the Tech Accord, about the dangers that smart devices and appliances can present.
Annalaura Gallo: They can be used to steal data, but also for illicit surveillance. So a house theft or heist could happen because some attackers are checking what's happening within the house, through a smart webcam, for instance. They could also be used in order to obtain a ransom. This is a bit less likely, but we have seen cases like that. So attackers could hack into IoT devices, slow them down or shut down certain functionalities, and put them back in order only when they receive the ransom. But the most concerning situation is when these devices are used as an attack base to infect more machines. They are a bit of an entry point to then conduct larger scale attacks.
Ali Wyne: So let's dive in now to talk about threats and solutions. Making the environment safer for consumers will take a coordinated effort on the part of industry, experts, and individuals, and here to help us think about better ways to safeguard ourselves today, we have two guests.
First up, Vince Jesaitis, who has deep industry knowledge on these products as a senior director of government affairs at Arm, a semiconductor and software design company. Vince, welcome.
Vince Jesaitis: Thank you for having me, Ali.
Ali Wyne: And in addition to that industry perspective, we're excited to have with us Beau Woods, someone who could hack into your own devices if he wanted to. Luckily for us, he's a cybersafety advocate looking to protect us all through his work at I Am The Cavalry, an all-volunteer initiative driving cybersecurity for public safety. Beau, it's great to have you.
Beau Woods: Great to be here, Ali.
Ali Wyne: Vince, let me start with you. So it's estimated that there are going to be more than 40 billion Internet-of-Things devices globally by 2025, so just in three years. So how quickly is this landscape of connected devices changing for consumers?
Vince Jesaitis: I would say it is changing incredibly rapidly, much like everything else over the last two years. The pandemic has accelerated the trend towards the adoption of IoT, both in the workplace and at home. If you think about some of the solutions that were discussed early in the pandemic on how we could return to normal office life, it involved the use of lots of new technology to trace employees, measure temperatures when they're coming in, contact trace, et cetera.
Vince Jesaitis: Similarly, as people were spending more time in the house over the last couple of years, there are a lot of benefits that can come from automating tasks through IoT devices, whether it's using smart light switches and bulbs that can be set to certain schedules to turn on and off, which can save electricity, connected garage door openers, smart doorbells, smart security cameras around your house, they can just provide additional security and comfort.
A good example of this that I could give from my own experience is, a couple of weeks ago we were traveling on a family vacation, and we have a smart thermostat in our house. And I started getting a notification through my smart phone that the furnace in our house had been running, but the temperature wasn't increasing. And so I was able to turn it off remotely.
What had actually happened was, there was a power outage in our neighborhood which had caused some sort of breaker to switch in our HVAC unit. If we didn't have that smart thermostat, the furnace could have been running for two or three days while we were still gone. It could've led to additional parts being worn out, or, worst-case scenario, it could have created some sort of electrical damage or potentially a fire.
So a lot of the benefits of IoT are around convenience, but I think that's one example of how they can help provide real financial and potentially safety benefits as well.
Ali Wyne: Beau, let me turn to you. When we talk about a consumer Internet of Things cyberattack, what do we mean specifically? So tell us, in an Internet of Things cyberattack, what has happened and what's possible.
Beau Woods: Yeah. As we heard, there are huge potential benefits from IoT devices, but there can be some downsides, right? So accidents and adversaries can cause any of the things that you could do with that device to happen without your knowing it, or without your commanding it to do so.
Beau Woods: Software adds complexity to the systems, and complexity adds potential for things to go wrong in novel and unexpected ways. So using the smart thermostat example, if adversaries can connect to it, they could potentially turn the thermostat on or off, and maybe tamper with some other things to cause a potential safety issue.
Beau Woods: And in fact, with some of the smart thermostats, there was a software update that happened to them - or a configuration change - that was remotely pushed down from the manufacturer, that turned several dozen, or several hundred, off in Chicago in winter. And you can imagine, if you don't have heating in Chicago in winter, it can get pretty brutal.
Ali Wyne: Oh, absolutely.
Beau Woods: Basically, anything that the device can do, that you want it to do, adversaries could potentially trigger that, or accidents due to the complexity and the increased complexity of those systems.
Ali Wyne: So, Vince, I want to come back to you, and I want to build on what Beau was just saying right now. So talking about... There's a kind of inherent duality to these Internet of Things devices. So on the one hand, of course, they can potentially confer a very wide range of benefits, but as Beau was just saying, in parallel to those benefits are also a set of vulnerabilities.
And so I want to probe a little bit more to ask, what is it about these devices that's special, and in particular, why exactly are these IoT devices different from just, say, regular computers? And why is, say, a smart doorbell, or in your case a smart thermostat... Why are those devices harder to secure than the devices that we're used to?
Vince Jesaitis: Maybe I'll start by talking about a traditional personal computer or smart phone. If you think about all the tasks you perform on one of those devices, they're incredibly complex. You could do word processing, video editing, web browsing. Each of those tasks requires a lot of compute power.
On the other hand, most IoT devices do one or two things. Their functionality, and therefore their computing power, is generally much lower than what you would find in a traditional personal computer or smart phone.
I also think, in the early stages of the IoT, let's say probably about a decade or so ago, when companies really started adding some level of computing capability to everyday devices, like a thermostat, or a doorbell, there were probably several misconceptions, or maybe miscalculations by the companies that started doing that.
I would say first, and Beau might disagree with this, but a lot of developers of those products probably didn't think a connected light bulb would be that attractive to hackers.
Secondly, I would say there was probably a belief around that time, rightly or wrongly, that security was expensive and time-consuming, and for someone that just wanted to get a product to the market, it was something that they didn't want to think about, or incorporate the cost to build that capability into a device.
Thankfully, I would say that the perspectives on both of these points have changed, and we're seeing a lot of progress in the security features and functionality built into IoT devices. But historically, unfortunately, that has not necessarily been the case.
Ali Wyne: So, Beau, let me come back to you, and ask you the flip side of the question that I just put to Vince. We introduced you as someone who could hack into our devices if you wanted to, but thankfully you're on our side, helping us secure them.
So Vince talked a little bit about why it's harder to secure these devices, and I wonder if you could help us understand what kinds of qualities make these devices easier to break into than say, traditional computer products?
Beau Woods: Yeah. I mean, IoT devices are basically computers with extra capability. As Vince mentioned, they have oftentimes limited computing power in them. But there's a, I think, broader concept here, which is that because the consequences are different, adversaries have had to figure out different ways to take advantage of it, right.
So for a long time there was a concept that people said, "IoT devices, no one would ever hack an IoT device. That's ridiculous. There's no money in it," right? But now we've seen that adversaries can gain things from IoT devices, potentially by using something like ransomware, or use them as a gateway to something else, where in a couple of cases something like a fish tank thermostat, I think, has been a vector to attack other computer systems.
It was basically that the device was exposed, and the adversary was able to use it to relay into other systems they cared more about. And because of that, the adversaries that will go after IoT devices will be different. They're not always going to be the kind of criminal element that you might think of. There's a whole trend of breaking into IoT devices that use really common passwords, and then using those IoT devices as kind of a digital army for distributed denial-of-service attacks, or for other types of attacks that can take down digital infrastructure on the internet.
Beau Woods: Oftentimes we see that done as a nuisance, or to extort those websites into paying money to keep them online or to staunch the DDoS attack. And you also have a situation where a lot of these devices may be built to be put in place for a year or two, but they can last much, much longer.
So the home router that you have, if you're like most people, is probably something that you've had for 10, 15, or more years in some cases. Think about a fridge. You're probably not going to replace your fridge every two years like you do your phone. So whatever security models we adopt for IoT devices, and especially early IoT devices, which didn't have very good security models, you'll be stuck with that as a consumer for many, many years, and often in ways that are directly connected, or directly exposed, to the internet so that you can do things like monitor or manage it remotely from your phone, or from another location.
So it's a combination of the limited computing power, limited ability to defend in some ways, the long time-scales, the practices that were in place when that device was put on the market, or put into your home, as well as the fact that most people treat IoT devices as kind of like set-it-and-forget-it. They install it once, and then don't ever mess with the configuration settings, or perform updates as we know that you should in any kind of good maintenance plan.
Ali Wyne: I think clearly, and we talked about this at the outset of our conversation, I think the attraction of these devices is only going to grow. I think that the market for these devices is only going to grow. But I trust that not everyone who's listening today, not everyone has the level of expertise that you and Vince do on sort of the intricacies of these devices.
So for the lay consumers who are listening, when they buy a connected oven, or a connected dryer, or a connected thermostat... how can we, as lay consumers, tell whether the IoT devices we've purchased are safe?
Beau Woods: Yeah. That's a really good question, and it's something that I've worked on quite a bit with a bunch of other people as well, on trying to come up with some kind of simple rubric, or a simple easy-to-use way to tell one device from another device. And not all devices, and not all manufacturers, give you the information you need to be able to make a smart decision at buying time, or to be able to operate it in a secure and safe manner.
But some of the things that you can look for as an average buyer are things like supported lifetimes. So how long does the manufacturer say they'll support the device for? Is it able to get updates, even?
Secondly, you'll want to be able to change the password on the device. Some devices say you can never change the password. It's hard-coded into the device, or it has a default that's hard-coded in, so that when you reset it, it goes back to an insecure state, or a less secure state.
So you can look for devices that let you easily change the password, and that have a unique default password to fall back to, one that's different from every other device that's sold, so that adversaries who figure out one default password can't just compromise hundreds of thousands of these at once.
Third, you can look and see whether or not they have something like a coordinated vulnerability disclosure policy, a way for the device manufacturer to take help from security researchers or others who may find a vulnerability in a device, and report it in good faith to the company so that it can get fixed.
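Beau's second point, about unique per-device default passwords, can be made concrete with a short sketch. The derivation scheme, secret value, and parameters below are hypothetical; they simply illustrate how a manufacturer could give every unit its own default so that one leaked password does not unlock a whole product line.

```python
# Hypothetical sketch: derive a unique per-device default password from the
# device serial number and a per-model secret kept at the factory.
import hmac, hashlib, base64

FACTORY_SECRET = b"per-model-secret-kept-only-at-the-factory"  # placeholder value

def default_password(serial_number: str, length: int = 12) -> str:
    digest = hmac.new(FACTORY_SECRET, serial_number.encode(), hashlib.sha256).digest()
    # Base32 keeps the password printable on a label and easy to type.
    return base64.b32encode(digest).decode().lower()[:length]

if __name__ == "__main__":
    print(default_password("SN-000123"))  # printed on this unit's label only
    print(default_password("SN-000124"))  # a different unit gets a different default
```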
Ali Wyne: Vince, let me turn to you now. So Beau started giving us a sense of some of the solutions that companies are thinking of. Could you tell us a little bit more about some of the concrete steps that companies are taking to enhance the security of these consumer devices?
Vince Jesaitis: Absolutely. I would say, as in other areas of computing, there's a recognition that security cannot be an afterthought. It really needs to be addressed at every step in the development process. The Cyber Tech Accord, along with I Am the Cavalry, the World Economic Forum, and Consumers International, in February of this year released a statement calling on companies to adopt a lot of the principles that Beau was talking about in response to the last question, when they're developing IoT products.
Arm itself, the company I work for, has developed a certification program called PSA Certified, which measures against these criteria. If you look at the work that's being done by the Department of Commerce and the European Telecommunications Standards Institute... Beau mentioned what's going on in the UK, or at least he mentioned the sentiment there. The government's acting in that space, as well.
All of this activity is really aligning with the direction that the industry is going, which is to ensure that there are baseline security capabilities in all IoT devices that are coming to market, and really putting market pressure on companies to address security when they're developing these products.
Ali Wyne: Beau, I want to come to you now. So you heard what Vince just said in terms of what Arm is doing, what other companies are doing in this space. What are some additional steps you think companies should be taking to enhance the security of these products, and what gaps do you currently see between where we are in terms of the security of these devices, and where we need to be?
Beau Woods: I'd say there's huge variability across manufacturers and across devices, from very, very small organizations that are very nascent in their security journey all the way up to some that are already doing a great job. There's a William Gibson quote, "The future is here. It's just not very evenly distributed." And I'd say that the future of security in IoT devices is here. It's just not very evenly distributed.
And so by getting word out, by helping manufacturers understand what the best security organizations are doing, I believe that we can help them get farther and more mature on their security curve, as well as reduce overall costs in their design and manufacturing processes, but also costs to the overall economy, since cybercrime and other types of cyberattacks take a toll on the economy.
Building security in from the start is a lot cheaper in the long run than trying to put it on after the fact, if you even can. And there are some capabilities that you absolutely can't add after the fact, right. If you don't have a software update mechanism, the way you have to update the software is to buy a new device, and that's not a great option for consumers.
So I think there's a lot of things that manufacturers can do. I think that the statement of support that we put out in February, that Vince mentioned, lays out some of those things and under each of those, there's a technical implementation companion in some of the standards that exist out there.
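Beau's point that a software update mechanism has to be designed in from the start can be illustrated with a small sketch of signed firmware updates: the device only applies images it can verify against a public key baked in at manufacture. This is a simplified, assumed design, not any particular vendor's scheme; it uses the third-party `cryptography` package and placeholder firmware bytes.

```python
# Hypothetical sketch of signed firmware updates (requires `pip install cryptography`).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side, at build time: sign the firmware image.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()      # this public key is baked into the device
firmware_image = b"...placeholder firmware bytes..."
signature = signing_key.sign(firmware_image)

# Device side, on every update: refuse anything that does not verify.
def apply_update(image: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, image)
    except InvalidSignature:
        return False                       # reject tampered or unofficial firmware
    # ...write the image to flash and reboot...
    return True

print(apply_update(firmware_image, signature))                # True
print(apply_update(firmware_image + b"tampered", signature))  # False
```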
Ali Wyne: Both you and Vince have gotten at this point already, but the burden doesn't just fall on manufacturers. There also is a certain responsibility that lies with consumers, themselves. What are some additional steps that individuals can take to secure their products? How would you sort of apportion the responsibility for IoT security between companies and individuals?
Beau Woods: Yeah. That's a tough question. There's many different places where individuals might be able to make a different choice, or take a different action. First is at the buying stage, right. Buying things that are more securable, which can often mean they have more features, more capabilities. There's a benefit of going with some of those devices that can do a little bit more sometimes, because they have better security models.
They can read up on manufacturer websites about the product and its security, and then, when they get the device home and they're operating it, they can set it up securely. A lot of manufacturers now have a wizard or a walk-through to securely set up your device. For those that don't, that could be an extra step that they could take. But at a certain point, there's not a lot that the consumer can actually do. It kind of falls on the manufacturer to provide things that are built in, because if consumers don't have a huge degree of technical sophistication, that security has got to come from the manufacturer, and you can't always assume that the buyer's going to be tech-savvy enough to really set something up in a secure way.
Ali Wyne: That makes sense. So Vince, let me come back to you. So we've talked now about the responsibilities of manufacturers. Beau just talked about some of the responsibilities that lie with consumers. And of course, there's at least one other critical player in this space, and that is government.
What role does government play, specifically, with regulations? How should they interact with companies? How should they interact with consumers? But talk with us a little bit more about the role that government regulation, specifically, plays in enhancing the security of consumer IoT devices.
Vince Jesaitis: Yeah. Government plays an incredibly important role in all of this. The market is certainly trending towards more secure IoT devices, but governments and action from governments can really accelerate that. And I would say, historically they've not done that. Really, over the last three to four years you've started to see more governments across the globe take action, and I'll give you a couple of examples of that.
The US government, in particular, has been very active over the last three to four years. Congress passed a law that requires the Department of Commerce to come up with security capabilities for IoT devices that are going to be purchased and used in government facilities. In coordination with that, an entity within the Department of Commerce that does a lot of technical and standards work came up with a baseline cybersecurity requirement for IoT devices, and it lists a lot of the capabilities that we discussed earlier. It's a good metric for companies to measure against.
I would also add that as part of the Biden administration's executive order on cybersecurity last year, they required the Department of Commerce to come up with a consumer IoT security label that's easy to understand, so that when customers go into a store and they're shopping for an IoT device, they can pick up a package and, one, know that security was addressed in some form or fashion while the device was being developed, and two, have some way to go to a website and get more specific information about how that IoT device is secured.
In a similar vein, the UK and the EU are moving in this direction, as well. By and large, I would say they're following the work from the European Telecommunications Standards Institute. And it's not just a US and European movement, either. Singapore has also adopted an IoT labeling program based on that same ETSI work. And South Korea, for instance, as well, has released IoT security guidance.
So there's a lot of similarities across all this activity, but it's a really positive thing to see because it's going to raise the floor and provide the consumers more security, and ultimately more confidence in these products that they're buying.
Ali Wyne: I want to now come back to the Cybersecurity Tech Accord, which is a main focus of this series that we're doing. The Cybersecurity Tech Accord, working together with I Am The Cavalry and with Consumers International... so these are groups representing consumers, hackers, and industry. They brought together businesses, civil society, and government stakeholders, and came up with a list of five things that all of those stakeholders should be doing to make consumer IoT devices safer.
Beau, could you tell us a little bit about that list, and why is it so important, especially coming from stakeholders who might not always have the same perspective?
Beau Woods: So that list was based on kind of a growing consensus around those five capabilities, that they are things that should be provided by manufacturers in IoT devices.
We first came together to start working on getting this statement of support out. We wanted to make sure that it was kind of a whole-of-society viewpoint, that it wasn't just one type of group or another that was showing support. We wanted to really show that there is a large and growing body of people and organizations that have worked deeply on these problems, and that have found that there are some common, implementable approaches to make more securable devices.
And so working with a lot of companies, manufacturers of the devices, manufacturers of the components, retailers, security companies, individual security researchers, people in different governments, and consumer groups as well, we felt that was a large swath of society that had all come to the same agreement. And so we felt that it was a really good way to raise the visibility of these five practices, of these five capabilities, so that more manufacturers could begin adopting them, so that more policy makers could have visibility, and so that more consumers could look at them and say, "How can I evaluate those five things?"
And I think that publishing this statement of support, we've seen a tremendous amount of follow-on support, other people that want to jump on board, other people who have learned about it because of this effort. And so I think it's been really, really impactful that way.
Ali Wyne: And Vince, some of the recommendations in this list of five, I think are reasonably clear, but I want to ask you, in particular, to zoom in a little bit on vulnerability disclosure policies. Tell us a little bit more about what those vulnerability disclosure policies would entail, and if you can, maybe give us a template for what a good one would look like for a company that's involved in this space.
Vince Jesaitis: Yeah. Vulnerability disclosure policies are something the average person has probably never heard of, or if they have, they probably don't really understand. I can say I didn't until I started working in the technology sector about 12 years ago. But they're incredibly important for cybersecurity. I would say, in short, they're internal policies that companies adopt to address security vulnerabilities or flaws that either the company finds, or others, independent researchers, or people just using the products bring to the company to let them know that there may be a vulnerability or a flaw in it.
I would say, a good vulnerability disclosure policy typically has a couple of key components. First, it really details who in the company is responsible for handling vulnerabilities that are brought to the company, and how they're going to mitigate potential vulnerability when they are made aware of those things.
Secondly, it provides a mechanism for outside individuals to report a potential issue. Oftentimes, users of technology, or researchers, are going to be more likely to find a problem with an IoT device or the software on it than the company that actually makes it, just because they're spending so much more time with it, or scrutinizing it in a different way.
Lastly, I would say, a key component is for a company to make sure they have a timeline for making that vulnerability public.
And the last one might not be super-intuitive. You might wonder, if you have a vulnerability, why you would make that known publicly. But it's incredibly important so that users of that technology, who have either adopted the software or deployed the IoT product in their home or in their office, can understand that there's a potential issue with that product, and can make sure that they're taking appropriate steps to protect themselves, either by finding an update for the software, or taking the device off of their network, or taking whatever other steps would be necessary to protect themselves if there is a vulnerability.
These look different for every company. A software company would respond to a vulnerability in their product in a different way than a hardware company, but those three components would be applicable regardless of where the company sits.
A good example of the differences in vulnerability disclosure policies can be found on the Cyber Tech Accord's website. The Cyber Tech Accord, I think, has somewhere in the neighborhood of 100 companies that have signed up, and nearly all of them have made their vulnerability disclosure policies public; they're linked to on that website.
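The components Vince lists, someone responsible for handling reports, a channel for outsiders to report, and a disclosure timeline, also have a common machine-readable counterpart: a security.txt file (RFC 9116) that tells researchers where to send reports and where the policy lives. The sketch below renders a minimal one; the contact address and URL are placeholders, not any real company's details.

```python
# Hypothetical sketch: rendering a minimal RFC 9116 "security.txt" file from the
# basic VDP components (a reporting contact, a policy URL, and a review date).
from datetime import datetime, timedelta, timezone

def render_security_txt(contact_email: str, policy_url: str, valid_days: int = 365) -> str:
    expires = (datetime.now(timezone.utc) + timedelta(days=valid_days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return "\n".join([
        f"Contact: mailto:{contact_email}",  # where outside researchers send reports
        f"Policy: {policy_url}",             # who handles reports and on what timeline
        f"Expires: {expires}",               # forces the file to be reviewed periodically
    ])

if __name__ == "__main__":
    print(render_security_txt("security@example-iot-vendor.com",
                              "https://example-iot-vendor.com/vulnerability-disclosure"))
```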
Ali Wyne: Thanks, Vince. And Beau, I want to turn to you now. Beyond heeding these vulnerability disclosure policies and paying attention to them, what are some additional ways in which consumers can protect their personal data on IoT devices?
Beau Woods: Yeah. Obviously, a lot of consumers are concerned about personal data potentially leaking out, or someone potentially getting it and tampering with it, depending on what they're looking at. So if you can tamper with someone's settings for their connected thermostat, to use Vince's example, then maybe you could change up the schedule. Or maybe you could pull down some times when the person is home, not home. When you know that they're not home, go rob their house.
But if you want to protect that information, then some things you can do are, firstly, before you buy a device, look at what kind of privacy policies are in place. Look at what kind of security policies the company maintains, because it's harder to protect your information when the company itself doesn't do a great job of protecting it, or when part of its business model is to sell it to somebody else, right?
Secondly, and this comes down to personal choice: if there's a cloud-connected component to it, then look at what type of information is being sent out to the cloud. Most companies will have some disclosures somewhere on the website that talk about the type of information that's sent off to a third-party location for storage or processing. Many devices can function without that cloud component, in which case they're not sending that data out. So you can make a choice: lose some capabilities by not using the cloud functionality, or give over some of the information.
And then finally, when you're getting ready to get rid of the device, a lot of them have some way to set it back to default settings, just wipe all the data off. In some cases you can delete your cloud account, and the company will give you guidance about whether or not they delete your data out of the cloud at that point.
But take those types of steps, and if you can't reset it back to factory settings in software, then you can take it to a place that recycles devices. In a lot of cases, they'll offer, as a service to consumers, to securely recycle that device: to take it apart, break the chips, and put the materials back into the supply chain. There are a lot of valuable metals, for instance gold, tin, and copper, in electronics, so some of these recyclers do it for free; they just want access to those resources.
Ali Wyne: I like this idea of secure recycling. I hadn't heard of it, but I think it's a really instructive and important point for consumers on how they can better protect their personal data.
Is there anything else you want consumers to know? If you were sort of closing out today, what else would you like consumers to know about threats in the IoT world, and what are some steps that consumers can take to advocate for themselves?
Beau Woods: I think a lot of people listening are probably saying, "Oh, yeah. IoT devices, they're not really a big threat to me." Either, "I don't have them," or, "I just haven't seen any headlines about it." But the threat landscape is always changing, and one constant over time is that there are more adversaries, more accidents that can happen, and more models and motivations for adversaries to go after things. So while you may not be impacted today, that will change over the next 5, 10, 15 years, and the devices that you're buying today, many of them will be with you for that time period. So think about what choices you want to make to be more securable in your personal life.
Ali Wyne: And on that note I want to ask one last question to Vince. Vince, any reactions to what Beau just said? Or any closing thoughts to leave us with?
Vince Jesaitis: I think that’s a great response and a great way to leave things. We’ve discussed what steps companies and governments are taking to drive up security, but really consumers need to scrutinize the products that they’re buying and hold the manufacturers who provide those products more accountable. And we’ve given examples of several resources, but really it is going to depend on what they do in the store. So I would say continue to ask questions, continue to look for devices that demonstrate security in some way. And make sure when those devices are being set up in your house, you take advantage of all the security features that are built in. So thank you for your time, Ali.
Ali Wyne: Well, thank you to both of you for answering so many questions. We will continue to ask questions, because the topics we've discussed are only going to grow more important.
Vince Jesaitis, senior director of government affairs at Arm, and Beau Woods, cybersafety advocate at I Am The Cavalry, thank you so much for being here, both of you.
Beau Woods: Thanks for having me.
Vince Jesaitis: Thanks for having me, Ali.
Ali Wyne: And that's it for this episode of Patching the System. You can tune in next time for more on the future of cyberthreats and what we can do about them. You can catch this particular podcast as a special drop in Ian Bremmer's GZERO World anywhere you get your podcasts.
I'm Ali Wyne. Thanks very much for listening.
Podcast: Cyber threats in Ukraine and beyond
Listen: Cyberattacks in Ukraine are the latest example of how cyberspace is increasingly a theater of conflict around the world. As part of the Global Stage series, a partnership between Microsoft and GZERO Media, the 5-part podcast “Patching the System” will explore the biggest cyber risks and challenges for governments, corporations, and consumers alike. Through the Cybersecurity Tech Accord, a public commitment from more than 150 technology companies, private sector tech leaders are working to create solutions and foster greater cyber resilience.
Our first episode defines the threat landscape and the role tech companies can play in improving “cyber hygiene” and security overall. The conversation features Tom Burt, Microsoft's Corporate Vice President for Customer Security and Trust; and Annalaura Gallo, Head of the Secretariat of The Cybersecurity Tech Accord.
Patching the System is moderated by Ali Wyne, Eurasia Group Senior Analyst.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.