Podcast: Would the proposed UN Cybercrime Treaty hurt more than it helps?
Listen: As the world of cybercrime continues to expand, it follows that international legal standards will need to expand with it. But while many governments around the globe see a need for a cybercrime treaty to set a standard, a current proposal on the table at the United Nations is raising concerns among private companies and nonprofit organizations alike. There are fears that it covers too broad a scope of crime, could fail to protect free speech and other human rights across borders, and may not actually have the intended effect of combating cybercrime.
In season 2, episode 4 of Patching the System, we focus on the international system of online peace and security. In this episode, we hear about provisions currently included in the proposed Russia-sponsored UN cybercrime treaty as deliberations continue - and why they might cause more problems than they solve.
Our participants are:
- Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord
- Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Would the proposed UN Cybercrime Treaty hurt more than it helps?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
NICK ASHTON-HART: We want to actually see a result that improves the situation for real citizens, that actually protects victims of real crimes, and that doesn't allow cybercrime to go unpunished. That's in no one's interest.
KATITZA RODRIGUEZ: By allowing countries to set their own standards of what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it truly serves its purpose in tackling cybercrimes without undermining human rights.
ALI WYNE: It's difficult to overstate the growing impact of international cybercrime. Many of us either have been victims of criminal activity online or know someone who has been.
Cybercrime is also big business: it's one of the top 10 risks highlighted in the World Economic Forum's 2023 Global Risk Report, and it's estimated that it could cost the world more than $10 trillion by 2025. Now, global challenges require global cooperation, but negotiations of a new UN Cybercrime Treaty have been complicated by questions around power, free speech and privacy online.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
In this episode, we'll explore the current draft of what would be the first United Nations Cybercrime Treaty, the tense negotiations behind the scenes, and the stakes that governments and private companies have in those talks.
Last season, we spoke about the UN Cybercrime Treaty negotiations when they were still relatively early in the process. While they had been kicked off by a Russia-sponsored resolution that passed in 2019, there had been delays due to COVID-19.
In 2022, there was no working draft and member states were simply making proposals about what should be included in a cybercrime treaty, what kinds of criminal activity it should address, and what kinds of cooperation it should enable.
Here's Amy Hogan-Burney of the Microsoft Digital Crimes Unit speaking back then:
AMY HOGAN-BURNEY: There is a greater need for international cooperation because as cyber crime escalates, it’s clearly borderless and it clearly requires both public sector and the private sector to work on the problem. Although I am just not certain that I think that a new treaty will actually increase that cooperation. And I’m a little concerned that it might do more harm than good. And so, yes, we want to be able to go after cyber criminals across jurisdiction. But at the same time, we want to make sure that we’re protecting fundamental freedoms, always respectful of privacy and other things. Also, we’re always mindful of authoritarian states that may be using these negotiations to criminalize content or freedom of expression.
Now a lot has happened since then as we've moved from the abstract to the concrete. The chair of the UN Negotiating Committee released a first draft of the potential new cybercrime treaty last June, providing the first glimpse into what could be new international law and highlighting exactly what's at stake. The final draft is expected in November with the diplomatic conference to finalize the text starting in late January 2024.
Joining me are Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord and Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation. Thanks so much for speaking with me today.
KATITZA RODRIGUEZ: Thank you for inviting us.
NICK ASHTON-HART: It's a pleasure to be here.
ALI WYNE: Let's dive right into the cybercrime treaty. Now, this process started as a UN resolution sponsored by Russia, and it was met early on by a lot of opposition from Western democracies, but there were also a lot of member states who genuinely thought that it was necessary to address cybercrime. So give us the broad strokes: why might we want a cybercrime treaty?
NICK ASHTON-HART: The continuous expansion of cybercrime at an explosive growth rate is clearly a problem and one that the private sector would like to see more effectively addressed because of course, we're on the front lines of addressing it as victims of it. At one level it sounds like an obvious candidate for international action.
In reality, of course, there is the Budapest Convention on cybercrime, which was agreed in 2001. It is not just a convention that European countries can join; any member state can join. If there hadn't been any international convention, then you could see how this would be an obvious thing to work on.
This was controversial from the beginning because such a convention already exists and is widely implemented: I think 68 countries have joined it, but 120 countries' laws have actually been impacted by the convention. There was also a question because of who was asking for it, which raised more questions than answers.
KATITZA RODRIGUEZ: For us, civil society, I don't think the treaty is necessary because there are other international treaties, but I do understand why some states are trying to push for this treaty: they feel that their system for law enforcement cooperation is just too slow or not reliable. And they have argued that they have not been able to set up effective mutual legal assistance treaties. But we think those reasons fall short, especially because a lot of these existing mechanisms include solid human rights safeguards, and when the existing mutual legal assistance treaties for international cooperation do not work well, we believe they can be improved and fixed.
And let's be real, there are times when not cooperating is actually the right thing to do, especially when criminal investigations could lead to the prosecution of individuals for their political beliefs, their sexual orientation or gender identity, or simply for speaking out, protesting peacefully, or insulting the president or the king.
On top of that, this treaty as it stands now might not even make the cybercrime cooperation process any faster. The negotiators are aiming for mandatory cooperation on almost all crimes on this planet, not just cybercrimes. This could end up bogging down the system even more.
ALI WYNE: Nick, let me just ask you, are there any specific aspects of a new global cybercrime treaty that you think could be genuinely helpful to citizens around the world?
NICK ASHTON-HART: Well, for one, it should focus only on cybercrime; that would be the most fundamental issue. The current trajectory would have this convention address all crimes of any kind, which is clearly an ocean-boiling exercise and creates many more problems than it solves. There are many developing countries who will say, as Katitza has noted, that they don't receive timely law enforcement cooperation through the present system, because if you are not a party to the Budapest Convention, honestly, you have to have a bilateral treaty relationship with every country that you want to have law enforcement cooperation with.
And clearly, every country negotiating a mutual legal assistance treaty with 193 others is not a recipe for an international system that's actually effective. That's where an instrument like this can come in and set a basic common set of standards so that all parties feel confident that the convention’s provisions will not be taken advantage of for unacceptable purposes.
ALI WYNE: Katitza, I want to bring you back into the conversation. On balance, what do you think of the draft of the treaty as it stands now as we approach the end of 2023?
KATITZA RODRIGUEZ: Honestly, I'm pretty worried. The last negotiation session in New York made it crystal clear that we're short of time and there is still a lot left undecided, especially on critical issues like defining the treaty scope and ensuring human rights are protected.
The treaty was supposed to tackle cybercrime, but it's morphing into something much broader: a general-purpose surveillance tool that could apply to any crime, tech involvement or not, as long as there is digital evidence. We're extremely far from our original goal and opening a can of worms. I agree with Nick when he said that a treaty with a tight focus on just actual cybercrimes, topped with solid human rights protections, could really make a difference. But sadly, what we are seeing right now is very far from that.
Many countries are pushing for sweeping surveillance powers, hoping to access real-time location data and communications for a wide array of crimes with minimal legal safeguards, the checks and balances that put limits on potential abuse of power. This is a big red flag for us.
On the international cooperation front, it's a bit of a free-for-all: the treaty leaves it up to individual countries to set their own standards for privacy and human rights when using these surveillance powers in cross-border investigations.
And we know that the standards of some countries fall far short of even minimal standards, yet every country that signs the treaty is expected to implement these cross-border cooperation powers. And here's where it gets really tricky: this sets a precedent for international cooperation on investigations, even into activities that might be considered criminal in one country but are actually forms of free expression. This includes laws against so-called fake news, peaceful protest, blasphemy, or expressing non-conforming sexual orientation or gender identity. These are matters of human rights.
ALI WYNE: Nick, from your perspective, what are the biggest concerns for industry right now with the text, with the negotiations as they're ongoing? What are the biggest concerns for industry and is there any major provision that you think is missing right now from the current text?
NICK ASHTON-HART: Firstly, I will say that industry actually agrees with everything you just heard from EFF. And that's one of the most striking things about this negotiation: in more than 25 years of working in multilateral policy, I have never seen all NGOs saying the same thing to the extent that is the case here. Across the board, we have the same concerns. We may emphasize some more than others or put a different level of emphasis on certain things, but we all agree comprehensively, I think, about the problems.
One thing that's very striking is this is a convention which is fundamentally about the sharing of personal information about real people between countries. There is no transparency at all at any point. In fact, the convention repeatedly says that all of these transfers of information should be kept secret.
This is the reality they are talking about agreeing to: a convention where countries globally share the personal information of citizens with no transparency at all. Ask yourself if that is a situation which isn't likely to be abused, because I think we know the answer. It's the old joke that you learn who somebody really is when you put them in a room and turn the lights off. Well, the lights are off, and the light switch doesn't exist in this treaty.
And so that, to us, is simply invidious; in 2024, seeing that bear the UN logo would be outrageous. And that's just the starting place. There are also provisions that would allow one country to ask another to seize the person of, say, a tech worker who is on holiday, or a government worker who is traveling and has access to the passwords of secure systems, and demand that that person turn over those codes with no reference back to their employer.
As Katitza has said, it also allows countries to ask others to provide location data and communication metadata about where a person is in real time, along with real-time access to their computer information. This is clearly subject to abuse, and when we brought this up with some delegations, they said, "Well, but countries do this already, so why do we have to worry about it?"
I just found that an astonishing level of cynicism: the fact that people abuse international law isn't an argument against trying to limit their ability to do it in this context. We have a fundamental disconnect where we're asking to trust all countries in the world to operate in the dark, in secret, forever, and to believe that that will work out well for human rights.
ALI WYNE: Katitza, let me bring you back into the conversation. You heard Nick's assessment. I'd like to ask you to react to that assessment and also to follow up with you, do you think that there are any critical provisions that need to be added to the current text of the draft treaty?
KATITZA RODRIGUEZ: Well, I agree with many of the points that Nick made. One, keeping a sharp focus on so-called cybercrimes is not only crucial for protecting human rights, from our point of view, but it's also key to making this whole cooperation work. We have got countries left and right pointing out the flaws in the current international cooperation mechanisms, saying they are too flawed, too complex. And yet here we are, heading towards a treaty that could cover a limitless list of crimes. That's not just missing the point; it's setting us up for even more complexity when the goal should be working together better and more easily to tackle very serious crimes, like the ransomware attacks that we have seen everywhere lately.
There are a few things that are also very problematic, more in the details. One is the provision Nick mentioned, which could be used to coerce individual engineers, people with the knowledge to access systems, compelling them to bypass their own security measures or the measures of their employers without the company actually knowing, and putting the engineer in trouble, because they won't be able to tell their employer that they are working on behalf of law enforcement. These provisions are really draconian, and they're also very bad for security, for encryption, for keeping us safe.
But there's another provision that is also very problematic for us, this one on international cooperation too, where it says that states should share "items or data required for analysis of investigations." The way it's phrased is very vague and leaves room for states to share entire databases or artificial intelligence training data. This could include biometric data, data that is very sensitive, and it's a human rights minefield. We have seen how biometric data and face and voice recognition can be used against protesters, minorities, journalists, and migrants in certain countries. This treaty shouldn't become a tool that facilitates such abuses on an international scale.
And we also know that Interpol, in the mix too, is developing a massive predictive analytics system fed by all sorts of data, including information provided by member states. The issue with predictive policing is that it's often pitched as unbiased since it's based on data rather than personal judgment, but we know that's far from the truth. It's bound to disproportionately affect Black and other over-policed communities. The data fed into these systems comes from a racially biased criminal punishment system, and arrests in Black neighborhoods are disproportionately high. Even without explicit racial information, the data is tainted.
One more: the human rights safeguards in the treaty. As Nick says, the negotiations are secret, with no transparency (we fully agree on that), but the safeguards themselves are very weak.
As it stands, the main human rights safeguards in the treaty don't even apply to the international cooperation chapter, which is a huge gap. It defers to national law, whatever national law says, and as I said before, for one country that's good and for others it's bad, and that's really problematic.
ALI WYNE: Nick, in terms of the private sector and in terms of technology companies, what are the practical concerns when it comes to potential misuses or abuses of the treaty from the perspective specifically of the Cybersecurity Tech Accord?
NICK ASHTON-HART: In the list of criminal acts in the convention, at the present time, none of them actually requires criminal intent. The criminal acts are only defined as "acts done intentionally without right." This opens the door for all kinds of abuses. For example, security researchers often attempt to break into systems in order to find defects that they can then notify the vendors of, so these can be fixed. This is a fundamentally important activity for the security of all systems globally. They are intentionally breaking into the system, but not for a negative purpose, for an entirely positive one.
But the convention does not recognize how important it is not to criminalize security researchers. The Budapest Convention, by contrast, actually does this. It has very extensive notes on the implementation of the convention, which are a part of the ratification process, meaning countries should not only implement the exact text of the convention, but they should do so in a rule of law-based environment that does, among other things, protect security researchers.
We have consistently said to the member states, "You need to make clear that criminal intent is the standard." The irony here is this is actually not complicated because this is a fundamental concept of criminal law called mens rea, which says that with the exception of certain crimes like murder, for someone to be convicted, you have to find that they had criminal intent.
Without that, you have the security researchers' problem. You also have the issue of whistleblowers, who routinely provide information without authorization, for example, to journalists or to government watchdog agencies. Those people would also fall foul of the convention as it's currently written, as would journalists' sources, depending on the legal environment in which its provisions are implemented. Like civil society, we have consistently pointed out these glaring omissions, and yet no country, including the developed Western countries you would expect to seize upon this, has moved to include protections for any of these situations.
I have to say that one of the most disappointing things about this negotiation is that, so far, most of the Western democracies are not acting to prevent abuses of this convention; they are resisting the efforts of all of us in civil society and the private sector urging them to take action. There are two notable exceptions, New Zealand and Canada, but the rest, frankly, are not very helpful.
Another issue is that it should be much clearer what happens when there's a conflict-of-law problem: where a country asks a provider for cooperation and the provider says, "Look, if we provide this information to you, it's coming from another jurisdiction and it would cause us to break the law in that jurisdiction." We have repeatedly said to the member states, "You need to provide for this situation, because it happens routinely today, and in such an instance it's up to the two cooperating states to work out between themselves how that data can be provided in a way that does not require the provider to break the law."
If you want to see more effective cooperation and more expeditious cooperation, you would want more safeguards, as Katitza has mentioned. There's a direct connection between how quickly cooperation requests go through and the level of safeguards and comfort with the legal system of the requesting and requested states.
Where a request goes through quickly, it's because the states both see that their legal systems are broadly compatible in terms of rights and the treatment of accused persons and appeals and the like. And so they not only see that the crimes are the same, called dual criminality, but that also the accused will be treated in a way that's broadly compatible with the home jurisdiction. And so there's a natural logic to saying, "Since we know this is the case, we should provide for this in here and ensure robust safeguards because that will produce the cooperation that everyone wants." Unfortunately, the opposite is the case. The cooperation elements continue to be weakened by poor safeguards.
ALI WYNE: I think that both of you have made clear that the stakes are very high for whether this treaty comes to pass, what will the final text be? What will the final provisions be? But just to put a fine point on it, are there concerns that this treaty could also set a precedent for future cybercrime legislation across jurisdictions? I can imagine this treaty serving as a north star in places that don't already have cybercrime laws in place, so Katitza, let me begin with you.
KATITZA RODRIGUEZ: Yes, those concerns are indeed very valid and very pressing. By setting a precedent where broad, intrusive surveillance tools are made available for an extensive range of crimes, we risk normalizing a global landscape where human rights are secondary to state surveillance and control. Law enforcement needs assured access to data, but the checks and balances and the safeguards are there to ensure that we can differentiate between the good cops and the bad cops. The treaty provides a framework that could empower states to use the guise of cybercrime prevention to clamp down on activities that are protected under human rights law.
And I think that this broad approach not only diverts valuable resources and attention away from tackling genuine cybercrimes, but also offers (and here is the answer to your question) an example for future legislation that could facilitate these repressive state practices. It sends a message that it is acceptable to use invasive surveillance tools to gather evidence for any crime deemed serious by a particular country, irrespective of the human rights implications. And that's wrong.
By allowing countries to set their own standards of what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it truly serves its purpose in tackling cybercrimes without undermining human rights. The stakes are high, and I know it's difficult, but we're talking about the UN, and we're talking about the UN Charter. The international community must work together to ensure that it can protect security and also fundamental rights.
NICK ASHTON-HART: I think Katitza has hit the nail on the head, and there's one particular element I'd like to add: something like 40% of the world's countries at the moment either do not have cybercrime legislation or are revising quite old cybercrime legislation. They are coming to this convention, they've told us this, because they believe it can be the forcing mechanism, the template that they can use to ensure that they get the cooperation they're interested in.
So the normative impact of this convention would be far greater than in a different situation, for example, where there was already a substantial level of legislation globally and it had been in place in most countries for a long enough period for them to have a good baseline of experience in what actually works in prosecuting cybercrimes and what doesn't.
But we're not in that situation. We're pretty much in the opposite situation and so this convention will have a disproportionately high impact on legislation in many countries because with the technical assistance that will come with it, it'll be the template that is used. Knowing that that is the case, we should be even more conservative in what we ask this convention to do and even more careful to ensure that what we do will actually help prosecute real cybercrimes and not facilitate cooperation on other crimes.
This makes things even more concerning for the private sector. We want to actually see a result that improves the situation for real citizens, that actually protects victims of real crimes, and that doesn't allow, as is unfortunately the case here, even large-scale cybercrime to go unpunished. That's in no one's interest, but this convention will not actually help with that. At this point, we would have to see it as net harmful to that objective, which is supposed to be a core objective.
ALI WYNE: We've discussed quite extensively the need for international agreements when it comes to cybercrime. We've also mentioned some of the concerns about the current deal on the table. Nick, what would you need to see to mitigate some of the concerns that you have about the current deal on the table?
NICK ASHTON-HART: The convention should be limited to the offenses that it contains. Its provisions should not be available for any other criminal activity or cooperation. That would be the starting place. The second thing would be to inscribe crimes that are self-evidently criminal by providing for mens rea in all the articles, to avoid the problems with whistleblowers, journalists, and security researchers. There should be a separate commitment that the provisions of this convention do not apply to actors acting in good faith to secure systems, such as those that have been described. There must be, we think, transparency. There is no excuse for a user not to be notified at the point that the offense for which their data was accessed has been adjudicated or the prosecution abandoned, and that should be explicitly provided.
People have a right to know what governments are doing with their personal information. We also think it should be much clearer what dual criminality means: it should be very straightforward that without dual criminality, no cooperation under the convention will take place, so that requests go through more quickly and it is much clearer that it is basically the same crime in all the cooperating jurisdictions. I would say those are the most important.
ALI WYNE: Katitza, you get the last word. What would you need to see to mitigate some of the concerns that you've expressed in our conversation about the current draft text on the table?
KATITZA RODRIGUEZ: First of all, we need to rethink how we handle refusals for cross-border investigations. The treaty is just too narrow here, offering barely any room to say no, even when the request to cooperate violates or is inconsistent with human rights law. We need to make dual criminality a must in order to invoke the international cooperation powers, as Nick says. This dual criminality principle is a safeguard: it means that if something is not a crime in both countries involved, the treaty shouldn't allow for any assistance. You also need clear, mandatory human rights safeguards in all international cooperation that are robust, with notification, transparency, and oversight mechanisms. Countries need to actively think about potential human rights implications before using these powers.
It also helps if we only allow cooperation for genuine cybercrimes, real core cybercrimes, and not just any crime involving a computer or generating electronic evidence, because today even an electric toaster could leave digital evidence.
I just want to conclude by saying that actual cybercrime investigations are often highly sophisticated, and there's a case to be made for an international effort focused on investigating those crimes, but including every crime under the sun in its scope, sorry, is really a big problem.
This treaty fails to create that focus. Second, it also fails to provide the safeguards for security researchers that Nick explained. We fully agree on that. Security researchers are the ones who make our systems safe. Criminalizing what they do and not providing effective safeguards really contradicts the core aim of the treaty, which is to make us more secure and to fight cybercrime. So we need a treaty that is narrow in scope and protects human rights. The end result as it stands, however, is a cybercrime treaty that may well do more to undermine cybersecurity than to help it.
ALI WYNE: A really thought-provoking note on which to close. Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord, and Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation. Nick, Katitza, thank you so much for speaking with me today.
NICK ASHTON-HART: Thanks very much. It's been a pleasure.
KATITZA RODRIGUEZ: Thanks for having me on. Muchas gracias. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. Catch all of the episodes from this season, exploring topics such as cyber mercenaries and foreign influence operations by following Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne, thanks for listening.
Podcast: Foreign influence, cyberspace, and geopolitics
Listen: Thanks to advancing technology like artificial intelligence and deep fakes, governments can increasingly use the online world to spread misinformation and influence foreign citizens and governments - as well as citizens at home. At the same time, governments and private companies are working hard to detect these campaigns and protect against them while upholding ideals like free speech and privacy.
In season 2, episode 3 of Patching the System, we're focusing on the international system of bringing peace and security online. In this episode, we look at the world of foreign influence operations and how policymakers are adapting.
Our participants are:
- Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats
- Clint Watts, General Manager of the Microsoft Threat Analysis Center
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: Foreign Influence, Cyberspace, and Geopolitics
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Teija Tiilikainen: What the malign actors are striving for is to see us start to compromise our values, to question our own values. We should make sure that we are not going in the direction the malign actors would want to steer us.
Clint Watts: From a technical perspective, influence operations are detected by people. Our work of detecting malign influence operations is really about a human problem powered by technology.
Ali Wyne: When people first heard this clip that circulated on social media, more than a few were confused, even shocked.
AI VIDEO: People might be surprised to hear me say this, but I actually like Ron DeSantis a lot. Yeah, I know. I'd say he's just the kind of guy this country needs, and I really mean that.
Ali Wyne: That was not Hillary Clinton. It was an AI-generated deepfake video, but it sounded so realistic that Reuters actually investigated it to prove that it was bogus. Could governments use techniques such as this one to spread false narratives in adversary nations or to influence the outcomes of elections? The answer is yes, and they already do. In fact, in a growing digital ecosystem, there are a wide range of ways in which governments can manipulate the information environment to push particular narratives.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. Today, we're looking at the growing threat of foreign influence operations, state-led efforts to misinform or distort information online.
Joining me now are Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats, and Clint Watts, General Manager of the Microsoft Threat Analysis Center. Teija, Clint, welcome to you both.
Teija Tiilikainen: Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: Before we dive into the substance of our conversation, I want to give folks an overview of how both of your organizations fit into this broader landscape. So Clint, let me turn to you first. Could you quickly explain what it is that Microsoft's Threat Analysis Center does, what its purpose is, what your role is there, and how does its approach now differ from what Microsoft has done in the past to highlight threat actors?
Clint Watts: Our mission is to detect, assess, and disrupt malign influence operations that affect Microsoft, its customers, and democracies. And it's quite a bit different from how Microsoft handled it up until we joined. We were originally a group called Miburo, and we worked on disrupting malign influence operations until we were acquired by Microsoft about 15 months ago.
And the idea behind it is that we can connect what's happening in the information environment with what's happening in cyberspace, and really start to help improve the integrity of the information ecosystem when it comes to authoritarian nations attempting malign influence attacks. Every day, we're tracking Russia, Iran, and China worldwide in 13 languages in terms of the influence operations they run. That's a combination of websites and social media. Hack-and-leak operations are a particular specialty of ours, where we work with the Microsoft Threat Intelligence Center: they see the cyberattacks, and we see the alleged leaks or influence campaigns on social media, and we can put those together to attribute what different authoritarian countries are doing, or trying to do, to democracies worldwide.
Ali Wyne: Teija, I want to pose the same question to you because today might be the first that some of our listeners are hearing of your organization. What is the role of the European Center of Excellence for Countering Hybrid Threats?
Teija Tiilikainen: So this center of excellence is an intergovernmental body that was established six years ago by nine governments from various EU and NATO countries, and today it covers 35 governments. So we cover 35 countries, all EU members and NATO allies, and we cooperate closely with the European Union and NATO. Our task is more strategic: we try to analyze the broad range of hybrid threat activity, and with hybrid threats we are referring to unconventional threat forms: election interference, attacks against critical infrastructure, manipulation of the information space, cyber, and all of that. We build capacity, we create knowledge and information about these things, and we try to provide recommendations and share best practices among our governments about how to counter these threats and how to protect our societies.
Ali Wyne: We're going to be talking about foreign influence operations throughout our conversations, but just first, let's discuss hybrid conflicts such as what we're seeing and what we have seen so far in Ukraine. And I'm wondering how digital tactics and conflicts have evolved. Bring us up to speed as to where we are now when it comes to these digital tactics and conflicts and how quickly we've gotten there.
Teija Tiilikainen: So it's easier to start with our environment, where societies rely more and more on digital solutions; information technology is part of our societies. We built that to the benefit of our democracies and their economies. But in this deep conflict where we are, a conflict that has many dimensions, one of them being the one between democracies and authoritarian states, the role of our digital solutions changes, and all of a sudden we see how they have started to be exploited against our security and stability. So it is about our reliance on critical infrastructure, where we have digital solutions, the whole information space, the cyber systems.
So this is really a new and strengthening dimension in conflicts worldwide, where we talk more and more about the need to protect our vulnerabilities, and those vulnerabilities increasingly lie in the digital space, in digital solutions. This is a very technological instrument, requiring ever more advanced solutions from our societies, from us, and a very different understanding of threats and risks. If we compare that with the more traditional picture, where armed attacks and military operations used to form threat number one, now it is about the resilience of our critical infrastructure, about the security and safety of our information solutions. So the world looks very different, and the ongoing conflicts do as well.
Ali Wyne: And that word, Teija, that you use, resilience, I suspect that we're going to be revisiting that word and that idea quite a bit throughout our conversation. Clint, let me turn to you now and ask how foreign influence operations in your experience, how have they evolved online? How are they conducted differently today? Are there generic approaches or do different countries execute these operations differently?
Clint Watts: So to set the scene for where this started, I think the point to look at is the Arab Spring. When it came to influence operations, the Arab Spring, Anonymous, Occupy Wall Street, lots of different political movements all occurred at roughly the same time. We often forget that. That's because social media allowed people to come together around ideas, organize, mobilize, and participate in different activities. That was very significant, I think, for nation states, but one in particular, which was Russia, was a little bit thrown off by, let's say, the Arab Spring and what happened in Egypt, for example. But at the same time it was intrigued by the ability to go into a democracy, infiltrate it in the online environment, and then start to pit its people against each other.
That was the entire idea of their Cold War strategy known as active measures, which was to go into any nation, particularly in the United States or any set of alliances, infiltrate those audiences and win through the force of politics rather than the politics of force. You can't win on the battlefield, but you can win in their political systems and that was very, very difficult to do in the analog era. Fast-forward to the social media age, which we saw the Russians do, was take that same approach with overt media, fringe media or semi-covert websites that look like they came from the country that was being targeted. And then combine that with covert troll accounts on social media that look like and talk like the target audience and then add the layer that they could do that no one else had really put together, which was cyberattacks, stealing people's information and timing the leaks of information to drive people's perceptions.
That is what really started about 10 years ago, and our team picked up on it very early in January 2014 around the conflict in Syria. They had already been doing it in the conflict in Ukraine 10 years ago, and then we watched it move towards the elections: Brexit first, the U.S. election, then the French and German elections. This is that 2015, '16, '17 period.
What's evolved since is everyone recognizing the power of information and how to use it and authoritarians looking to grab onto that power. So Iran has been quite prolific in it. They have some limitations, they have resource issues, but they still do some pretty complex information attacks on audiences. And now China is the real game changer. We just released our first East Asia report where we diagrammed what we saw as an incredible scaling of operations and centralization of it.
So looking at how to defend, I think it's remarkable that, as Teija mentioned, in Europe a lot of countries that are actually quite small, Lithuania, for example, have been able to mobilize their citizens in a very organized way to help as part of the state's defense: come together with a strategy, and network people to spot disinformation and refute it when it comes from Russia.
In other parts of the world though, it's been much, much tougher, particularly even the United States where we've seen the Russians and other countries now infiltrate into audiences and you're trying to figure out how to build a coherent system around defending when you have a plurality of views and identities and different politics. It's actually somewhat more difficult, I think, the larger a country is to defend, particularly with democracies.
And then the other thing is you've seen the resilience and rebirth of alliances and not just on battlefields like NATO, but you see NATO, the EU, the Hybrid CoE, you see these groups organizing together to come through with definitions around terminology, what is a harm to democracy, and then how best to combat it. So it's been an interesting transition I would say over the last six years and you’re starting to see a lot of organization in terms of how democracies are going to defend themselves moving forward.
Ali Wyne: Teija, I want to come back to you. What impact have foreign influence operations had in the context of the war in Ukraine, both inside Ukraine but also globally?
Teija Tiilikainen: I think this is a full-scale war also in the sense that it is very much a war about narratives. So it is about whose story, whose narrative is winning this war. I must say that Russia has been quite successful with its own narrative if we think about how supportive the Russian domestic population still is with respect to the war and the role of the regime, of course there are other instruments also used.
Before the war started, Russia began to promote a very false story about what was going on in Ukraine. There was the argument about an ongoing Nazification of Ukraine. There was another argument about a genocide of the Russian minorities in Ukraine that was supposedly taking place. And there was also a narrative about how Ukraine had become a tool in the toolbox of the Western alliance, that is NATO, or for the U.S. to exert its influence, and how it was also used offensively against Russia. These were of course all parts of the Russian information campaign, the disinformation with which it justified and legitimized its war.
If we take a look at the information space in Europe, or more broadly in Africa, for instance, today we see that the Western narrative about the real causes of the war, how Russia violated international law and the integrity and sovereignty of Ukraine, this fact-based narrative is not doing that well. This proves the strength of foreign influence operations when they are strategic and well-planned, and of course when they are used by actors such as Russia and China that tend to cooperate. Also outside the war zone, China is using this Russian narrative to put the blame for the war on the West and present itself as a reliable international actor.
So there were many elements in the war, not only the military activity, but I would in particular want to emphasize the role of these information operations.
Ali Wyne: It's sobering not only thinking about the impact of these disinformation operations, these foreign influence operations, but also, Teija, you mentioned the ways in which disinformation actors are learning from one another and I imagine that that trend is going to grow even more pronounced in the years and decades ahead. So thank you for that answer. Clint, from a technical perspective, what goes into recognizing information operations? What goes into investigating information operations and ultimately identifying who's responsible?
Clint Watts: I think one of the ironies of our work is that, from a technical perspective, influence operations are detected by people. One of the big differences, especially as we work with MSTIC, the cyber team, and our team, is that our work of detecting malign influence operations is really about a human problem powered by technology. If you want to be able to understand and get your lead, we work more like a newspaper in many ways. We have a beat that we're covering; let's say it's Russian influence operations in Africa. And we have real humans, people with master's degrees who speak the language. I think the team speaks 13 languages in total amongst the 28 of us. They sit and they watch and they get enmeshed in those communities and watch the discussions that are going on.
But ultimately we're using some technical skills, some data science, to pick up those trends and patterns, because the one thing that's true of influence operations across the board is you cannot influence and hide forever. Ultimately your position or, as Teija said, your narratives will track back to the authoritarian country - Russia, Iran, China - and what they're trying to achieve. And there are always tells, common tells of context: words, phrases, and sentences used out of context. And then you can also look at it from the technical perspective. Nine years ago, when we came onto the Russians, the number one technical indicator of Russian accounts was Moscow time versus U.S. time.
Ali Wyne: Interesting.
Clint Watts: They worked in shifts. They were posing as Americans but talking at 2:00 AM, mostly about Syria. And so it stuck out, right? That was a contextual thing. You move, though, from those human tips and insights, almost like a beat reporter, to using technical tools. That's where we dive in. So that's everything from understanding associations of time zones, to how different batches of accounts on social media might work in synchronization together, to how they'll change topics time and time again. The Russians are a classic example, wanting to talk about Venezuela one day, Cuba the next, Syria the third day, the U.S. election the fourth, right? They move in sequence.
And so I think when we're watching people and training them, when they first come on board, it's always interesting. We try and pair them up in teams of three or four with a mix of skills. We have a very interdisciplinary set of teams. One will be very good in terms of understanding cybersecurity and technical aspects. Another one, a data scientist. All of them can speak a language and ultimately one is a former journalist or an international relations student that really understands the region and it's that team environment working together in person that really allows us to do that detection but then use more technical tools to do the attribution.
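The time-zone tell Clint describes, accounts posing as Americans but posting on a Moscow work shift, can be illustrated with a small heuristic. This is a hypothetical sketch, not Microsoft's actual tooling: all timestamps are invented, and real analysis would combine many more signals (language, content, account coordination).

```python
from datetime import datetime, timezone, timedelta

MOSCOW = timezone(timedelta(hours=3))       # UTC+3; Russia has used no DST since 2014
US_EASTERN = timezone(timedelta(hours=-5))  # fixed offset; ignores DST for simplicity

def working_hours_fraction(timestamps_utc, tz, start=9, end=18):
    """Fraction of posts falling within local working hours in tz."""
    hours = [ts.astimezone(tz).hour for ts in timestamps_utc]
    return sum(start <= h < end for h in hours) / len(hours)

def looks_like_moscow_shift(timestamps_utc, threshold=0.8):
    """Flag an account posing as American whose posting activity
    clusters in Moscow working hours rather than U.S. ones."""
    return (working_hours_fraction(timestamps_utc, MOSCOW) >= threshold
            and working_hours_fraction(timestamps_utc, US_EASTERN) < 0.5)

# Invented example: posts at 06:30-14:30 UTC, i.e. 09:30-17:30 in Moscow
# but 01:30-09:30 U.S. Eastern -- the 2:00 AM shift-work tell Clint mentions.
posts = [datetime(2023, 5, 1, h, 30, tzinfo=timezone.utc) for h in range(6, 15)]
print(looks_like_moscow_shift(posts))  # True
```

The threshold values here are arbitrary illustrations; the point is that synchronized, shift-like posting schedules are a machine-detectable artifact of a human operation.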
Ali Wyne: So you talked about identifying and attributing foreign influence operations, and that's one matter, but how do you actually combat them? And what role, if any, can the technology industry play in combating them?
Clint Watts: So we're at a key spot, I think, at Microsoft in protecting the information environment, because we understand the technical signatures much better than any one government could, or probably should. There are lots of privacy considerations, and we take maintaining customer privacy very seriously at Microsoft. At the same time, the role that tech can play is illustrated by some of our recent investigations, one of them where we found more than 30 websites being run out of the same three IP addresses, all sharing content pushed from Beijing to local environments, and the local communities don't have any idea that those are actually Chinese state-sponsored websites.
So what we can do, being part of the tech industry, is confirm from a technical perspective that all of this activity is linked together. I think that's particularly powerful. Also, in terms of what we would call the cyber-influence convergence, we can see a leak operation where an elected official in one country is targeted by a foreign cyberattack for a hack-and-leak operation. We can see where the hack occurred. If we have good attribution on it, and on Russia and Iran in particular we have very strong attribution and publish frequently, then we can match that up with the leaks we see coming out and where they come from. And usually the first person to leak the information is in bed with the hacker that got the information. So that's another role that tech can play: awareness of who the actors are, what the connections are between an influence operation and a cyberattack, and how that can change people's perspectives, let's say, going into an election.
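Clint's point about dozens of sites sharing a handful of IP addresses amounts to a simple infrastructure-clustering check. The sketch below is purely illustrative: every domain and IP address is invented (they use reserved documentation ranges), and a real investigation would draw on passive-DNS history, certificates, and content similarity rather than a static mapping.

```python
from collections import defaultdict

# Invented resolution data; a real pipeline would populate this from
# DNS lookups or passive-DNS records. All names and IPs are hypothetical.
domain_ips = {
    "local-news-a.example": "203.0.113.10",
    "local-news-b.example": "203.0.113.10",
    "community-voice.example": "203.0.113.11",
    "independent-daily.example": "203.0.113.10",
    "unrelated-site.example": "198.51.100.7",
}

def cluster_by_ip(domain_ips, min_domains=3):
    """Group domains by hosting IP and keep clusters big enough to
    suggest centrally operated infrastructure."""
    clusters = defaultdict(list)
    for domain, ip in domain_ips.items():
        clusters[ip].append(domain)
    return {ip: doms for ip, doms in clusters.items() if len(doms) >= min_domains}

suspicious = cluster_by_ip(domain_ips)
print(suspicious)
# {'203.0.113.10': ['local-news-a.example', 'local-news-b.example', 'independent-daily.example']}
```

Shared hosting alone proves nothing, as many legitimate sites sit behind common providers, which is why findings like the 30-site case also rest on shared content and coordinated publishing.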
Ali Wyne: Teija, I want to come back to you to ask about a dilemma that I suspect you, your colleagues, and everyone operating in this space are grappling with. One of the central tensions in combating disinformation, of course, is preserving free speech in the nations where it exists. How should democracies approach that balancing act?
Teija Tiilikainen: This is a very good question, and I think what we should keep in mind is that what the malign actors are striving for is to see us start to compromise our values, to question our own values. So: open society, freedom of speech, rule of law, democratic practices, and the principle of democracy. We should stick to our values and make sure that we are not going in the direction the malign actors would want to steer us. But it is exactly as you formulate the question: how do we make sure that these values are not exploited against our broad societal security, as is happening right now?
So of course there is not one single solution. Technological solutions can certainly help us protect our society, as can broad awareness in society about these types of threats. Media literacy is the keyword often mentioned in this context. A totally new approach to the information space is needed, and it can be achieved through education and study programs, but also by supporting quality media, the kind of media that relies on journalistic ethics. So we must make sure that our information environment is solid, and that in the future we will still have the possibility to distinguish between disinformation and facts, because that distinction is getting very blurred in a situation where a competition between narratives is going on. Information has become a tool in many of the conflicts we have in the international space, but also, many times, at the domestic level.
I would like to offer the center's model, because it's not only that we need cooperation between private actors, companies, civil society actors, and governmental actors. We also need firm cooperation among like-minded states: sharing of best practices and learning. We can learn from each other; if the malign actors do that, we should also take that model into use when it comes to questions such as how to counter these threats, how to build resilience, and what solutions we have created in our different societies. And this is exactly why our center of excellence was established: to provide a platform for that sharing of best practices and learning from each other.
So it is a very complicated environment in terms of our security and our resilience, and we need a broad package of tools to protect ourselves. But I still want to stress our values, and the very fact that this is what the malign actors would like to challenge and want us to challenge as well. So, let's stick to them.
Ali Wyne: Clint, let me come back to you. So we are heading into an electorally very consequential year. And perhaps I'm understating it, 2024 is going to be a huge election year, not only for the United States but also for many other countries where folks will be going to polls for the first time in the age of generative artificial intelligence. Does that fact concern you and how is artificial intelligence changing this game overall?
Clint Watts: Yeah, so I think it's too early for me to say as strange as it is. I remind our team, I didn't know what ChatGPT was a year ago, so I don't know that we know what AI will even be able to do a year from now.
Ali Wyne: Fair point, fair point.
Clint Watts: In the last two weeks, I've seen or experimented with so many different AI tools that I just don't know the impact yet. I need to think it through and watch a little bit more in terms of where things are going with it. But there are a few notes that I would say about elections and deepfakes or generative AI.
Since the invasion of Ukraine, we have seen very sophisticated fakes of both Zelensky and Putin and they haven't worked. Crowds, when they see those videos, they're pretty smart collectively about saying, "Oh, I've seen that background before. I've seen that face before. I know that person isn't where they're being staged at right now." So I think that is the importance of setting.
Public versus private I think is where we'll see harms in terms of AI. When people are alone and AI is used against them, let's say, a deepfake audio for a wire transfer, we're already seeing the damages of that, that's quite concerning. So I think from an election standpoint, you can start to look for it and what are some natural worries? Robocalls to me would be more worrisome really than a deepfake video that we tend to think about.
The other thing about AI that I don't think gets enough examination, at least from a media perspective: everyone thinks they'll see a politician say something. Your opening clip is an example of it, and it will fool audiences in a very dramatic way. But the power of AI in terms of utility for influence operations is mostly about understanding audiences, about being able to connect with an audience through a message and a messenger that is appropriate for that audience. And by that I mean creating messages that make more sense.
Part of the challenge for Russia and China in particular is always context. How do you look like an American or a European in their country? Well, you have to be able to speak the language well. That's one thing AI can help you with. Two, you have to look like the target audience to some degree. So you could make messengers now. But I think the bigger part is understanding the context and timing and making it seem appropriate. And those are all things where I think AI can be an advantage.
I would also note that here at Microsoft, my philosophy with the team is machines are good at detecting machines and people are good at detecting people. And so there are a lot of AI tools we're already using in cybersecurity, for example, with our copilots where we're using AI to detect AI and it's moving very quickly. As much as there's escalation on the AI side, there's also escalation on the defensive side. I'm just not sure that we've even seen all the tools that will be used one year from now.
Ali Wyne: Teija, let me just ask you about artificial intelligence more broadly. Do you think that it can be both a tool for combating disinformation and a weapon for promulgating disinformation? How do you view artificial intelligence broadly when it comes to the disinformation challenge?
Teija Tiilikainen: I see a lot of risks. I also see possibilities, and artificial intelligence can certainly be used as a resilience tool. But the question is more about who is faster: do the malign actors take full advantage of AI before we find the loopholes and possible vulnerabilities? I think it's very much a hardcore question for our democracies. The day when an external actor can interfere efficiently in our democratic processes, in elections and election campaigns, the very day when we can no longer be sure that what is happening in that framework is domestically driven, that day will be very dangerous for the whole democratic model, for the whole functioning of our Western democracies.
And we are approaching that day, and AI is, as Clint explained, one possible tool for malign actors who want not only to discredit the model but also to interfere in democratic processes, to affect the outcomes of elections and the topics of elections. So deepfakes and all the solutions that use AI are so much more efficient, so much faster, and able to use so much more data.
So I see unlimited possibilities unfortunately for the use of AI for malign purposes. So this is what we should focus on today when we focus on resilience and the resilience of our digital systems.
And this is also a highly unregulated field, including at the international level. If we think about weapons, if we think about military force: well, now we are in a situation of deep conflict, but before we got here we used to have agreements, treaties, and conventions between states that regulated the use of weapons. Those agreements are no longer in very good shape. But what do we have in the realm of cyber? At the international level this is a highly unregulated field. So there are many problems, and I can only encourage and stress the need to identify the risks that come with these solutions. And of course we need regulation of AI solutions and systems at the state level, as well as, hopefully at some point, international agreements concerning the use of AI.
Ali Wyne: I want to close by emphasizing that human component and ask you, as we look ahead and as we think about ways in which governments, private sector actors, individuals, and others in this ecosystem can be more effective at combating disinformation and foreign influence operations, what kinds of societal changes need to happen to neutralize the impact of these operations? So talk to us a little bit more about the human element of this challenge and what kinds of changes need to happen at the societal level.
Teija Tiilikainen: I would say that we need a cultural change. We need to understand societal security very differently. We need to understand the risks and threats against societal security in a different way. And this is about education. This is about schools, this is about study programs at universities. This is about openness in media, about risks and threats.
This also applies in those countries that do not have the tradition. In the Nordic countries, here in Finland and in Scandinavia, we have a firm tradition of public-private cooperation when it comes to security policy. We are small nations and the geopolitical region has been unstable for a long time, so there is a need for public and private actors to share the same understanding of security threats and to cooperate to find common solutions. I can only stress the importance of public-private cooperation in this environment.
We need more systematic forms of resilience. We have to ask ourselves: what does resilience mean? Where do we start building resilience? What are all the necessary components of resilience that we need to take into account? We have international elements, national elements, local elements; we have governmental and civil society parts, and they are all interlinked. There is no safe space anywhere. We need to create comprehensive solutions that cover all possible vulnerabilities. So I would say that the security culture needs to change. In the old security culture we tended to think about domestic threats and then international threats; now they are part of the same picture. We tended to think about military and nonmilitary; they too are very much interlinked in this new technological environment. So: new types of thinking, new types of culture. I would like to get back to universities and schools and try to engage experts to think about the components of this new culture.
Ali Wyne: Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats. Clint Watts, General Manager of the Microsoft Threat Analysis Center. Teija, Clint, thank you both very much for being here.
Teija Tiilikainen: Thank you. It was a pleasure. Thank you.
Clint Watts: Thanks for having me.
Ali Wyne: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of our new season. I'm Ali Wyne. Thank you very much for listening.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Podcast: How cyber diplomacy is protecting the world from online threats
Listen: Just as bank robbers have moved from physical banks to the online world, those fighting crime are also increasingly engaged in the digital realm. Enter the world of the cyber diplomat, a growing force in international relations specifically focused on creating a more just and safe cyberspace.
In season 2 of Patching the System, we're focusing on the international systems and organizations working to bring peace and security online. In this episode, we're discussing the role of cyber diplomats, the threats they are combatting, and how they work with public and private sectors to accomplish their goals.
Our participants are:
- Benedikt Wechsler, Switzerland's Ambassador for Digitization
- Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft.
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: How cyber diplomacy is protecting the world from online threats
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
BENEDIKT WECHSLER: We have to be aware that although we are so familiar with the cyber and digital world, it's still a new technology. And I think we don't have that much time to develop these organizations and rules as we had for the maritime or for the airspace.
KAJA CIGLIC: This situation is both terrifying and sort of deteriorating, I would say, at the same time. Part of the reason is because the technology's evolving so fast. Every time there is a new tool put on the market, it can, and someone will try and test it as a weapon.
ALI WYNE: It is hard to overstate just how much we rely on digital technology and connectivity in our daily lives, from the delivery of essential services, including drinking water and electricity, to how we work, pay our bills, get our news. Increasingly, it all depends on an ever-growing cyberspace. But as humanity's digital footprint grows, cyberspace is also growing as a domain of conflict, where attacks have the potential to bring down a power grid, rattle the stock market, or compromise the data and security of millions of people in just moments.
Got your attention? Well, good.
Welcome to the second season of Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst with Eurasia Group.
Now, whether you're a policy expert, you're a curious coder, or you're just a listener wondering if your toaster is plotting global domination, this podcast is for you. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
Last season, we tackled some of the more practical aspects of cybersecurity, including protecting the Internet of Things and combating hackings and ransomware attacks. This time around, we're going global, talking about peace and security online, talking about how the international system is trying to bring stability and make sense of this new domain of conflict.
Digital transformation is happening at unprecedented speeds: AI, anyone? And policy and regulation need to evolve to keep up with that reality. Meanwhile, there has been widespread use of cyber operations in Russia's invasion of Ukraine, the first large-scale example of hybrid warfare. Well, what are the rules?
Enter the cyber diplomat. An increasing number of nations have them: ambassadors who are assigned not to a country or a region, but instead, assigned to addressing a range of issues online that require international cooperation. Many of these officials are based in Silicon Valley, and the European Union just recently opened a digital diplomacy office in San Francisco.
Meanwhile, the United States named its first Cyber Ambassador, Nathaniel Fick, to the State Department just last year. Here he is at a Council on Foreign Relations event describing the work of his office:
NATHANIEL FICK: Part of the goal here was to bring in not only one person, but a group of people with other perspectives, outside perspectives, in order to build something new inside the department. It's as close to a startup as you're going to get in a large bureaucracy like the Department of State. I think one of our goals, again, is to really restore public-private partnership as a substantive term.
ALI WYNE: What do cyber diplomats do? Why do we need them, and how do they interact with private sector companies around the world?
I'm talking about this subject with Benedikt Wechsler, Switzerland's Ambassador for Digitization. And Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft. Welcome to you both.
BENEDIKT WECHSLER: Pleasure to be here.
KAJA CIGLIC: Thank you for having us.
ALI WYNE: Ambassador, let me begin with you. What does an Ambassador for Digitization do, and why did Switzerland feel that it was necessary as a diplomatic position?
BENEDIKT WECHSLER: Diplomacy is one of the oldest professions in the world, actually. And when our new minister came in four years ago, he wanted to know what the world was going to look like in about eight or 10 years. Out of that came a foreign policy vision, which of course stated that the digital world, the cyberspace, would be ever more important. And an ambassador is sent abroad to promote and protect the interests of citizens and companies, but also to promote an international order conducive to serving the best interests not only of his own country but of the whole world.
But we didn't have a structure that deals with the digital world, because there are new partners, new power centers, new actors - yet the same interests and values are at stake. So we decided to set up a division for digitalization, which means exactly that: to promote the interests of citizens and Swiss companies, and our values and human rights as well, in that new field, and to develop it with new partners. That is our key mission.
ALI WYNE: When most other folks hear the word ambassador, we are thinking about an ambassador to country X. I think it's really exciting that we now have an ambassadorial position for this critical priority. So, run us through some of the most pressing problems that you're tackling in this new role for a very new remit.
BENEDIKT WECHSLER: Normally, we always have a little tendency to fix our ideas on problems and security and risks. And that, of course, is the underlying, most important issue: this digital space, cyberspace, has to be a safe space; otherwise, people don't want to engage in it. That's also a change from the physical world to the digital world. I just recently read a report that in Denmark there have been no bank robberies anymore, because there are no more banks where you can get money out. But, of course, the robbers moved to cyberspace.
ALI WYNE: Right. Right.
BENEDIKT WECHSLER: Another little parallel to what we're trying to do: back in the age of the railways, there was a problem that train coaches were moving from one country to another, but there was no means to lock or unlock these coaches. So a group of experts sat together in Bern and devised a special key, which is still in use today, that could lock and unlock these wagons and make train connections safer. That is exactly what we now have to do in cyberspace: find this key of Bern, or key of Geneva, or key of wherever, to make the digital cyberspace safe and workable.
ALI WYNE: Kaja, I want to come to you next, and just building off of the ambassador's remarks. You've said previously that cyber diplomacy is different from other traditional forms of diplomacy because it's multi-stakeholder. What do you mean by multi-stakeholder in this particular context and explain why Microsoft has what it calls a digital diplomacy team?
KAJA CIGLIC: So if you think about the origins of the internet, the internet has always, from the onset, been governed and set up by groups that are not only governments. It includes governments, but it also includes academia, the private sector, and various representatives of civil society. And as the internet grew and became an ever-present part of our lives, its governance structures grew with it.
Of course, governments - states - are there to determine what the regulations are. But the internet is global, so it's a little bit like what the ambassador was saying about the railways: finding ways for the trains not just to safely unlock and unload, but to go from one country to another on the same tracks, was also something that had to be figured out. I think that is where we are in the online space at the moment.
And currently, the vast majority of it is run and operated by the private sector. That's why we say it has to be a different conversation that includes all these other stakeholders - hence the word multi-stakeholder - not just governments, which is more where traditional diplomatic conversations have been.
And Microsoft has identified this area as an area of interest and an area of a priority, almost 10 years ago now, when we first started talking about, "Okay, we need clear rules, particularly for safety and security online." Because as all aspects of our life are moving online, we need to make sure that they can continue operating more or less unthreatened.
ALI WYNE: And we talked a lot about this subject last season. We're going to be talking a lot again about this subject this season. Tell us what the Cybersecurity Tech Accord is and tell us how it fits into this conversation?
KAJA CIGLIC: Yeah, the Cybersecurity Tech Accord is a group of companies that effectively came together in 2018. At that point, at the onset, it was just above 30, and now it's just over 150 companies from all sizes from everywhere around the world that all agree that we need to be having this conversation, and we need to be making advances on how to secure peace and stability online not just now but for future generations.
And so the group came together around four fundamental principles. All the companies are committed to a strong defense of their customers. All are committed to not conducting offensive operations in cyberspace. All are committed to capacity building - that is, sharing knowledge and understanding. And all are committed to working with each other and with others in the community - not just private sector companies but, as I said earlier, civil society groups, academia, and governments - to try and advance these values and goals.
ALI WYNE: Ambassador, I feel like, every week now, we learn about some state-sponsored hacking or cyber-attacks. You think about air, land, sea - we at least have some clear rules for state behavior based on principles such as recognized borders, sovereign airspace, international waters. Do we have similar international expectations and/or obligations to be respected in cyberspace?
BENEDIKT WECHSLER: Sometimes, I think, we have to be aware that although we are so familiar with the cyber and digital world, it's still a new technology. When you look at the air or the sea, those regimes took decades; their governance and all the rules and norms evolved over years and decades. And I think we don't have that much time to develop these organizations and rules as we had for the maritime domain or for the airspace. But, as Kaja said, this is a new, specific, and especially multi-stakeholder world and space, and we have to take that into account.
So we cannot just set up a new organization like we did in the old days and think, "Well, we'll sit together among states, we'll negotiate something, then we'll have a good glass of wine, and we hope that everybody is going to abide by these rules." No, I think we have to have a whole toolbox - or maybe a Swiss army knife - with all sorts of tools adapted to today's differences. So, for instance, we negotiated a classical cybercrime convention in Vienna. There's the process of the Open-Ended Working Group at the UN on responsible behavior in cyberspace, where all these norms are being developed and where civil society and companies, the private sector, are also involved.
Then we have dialogues, for instance, the Sino-European Cyber Dialogue with China and the European states. We have it also with the United States. And there we are sort of defining, "Okay, international humanitarian law, what is protection of basic infrastructure?" So we're getting there. And I think also very importantly is that we engage very closely with the private sector because there's the knowledge, there's the innovation, so that we can really develop smart rules.
And then, lastly, I think we also have to embrace the scientific world much more, because in science there is so much progress and innovation and foresight that we have to take into account - these advances will become reality much faster than we expect. We cannot see where AI is going without also involving the scientific world.
ALI WYNE: This is the second season of Patching the System, and it's amazing. When I look back on the episodes that we recorded last year, it's extraordinary how much science has progressed, how much technology has progressed. And I suspect that the rate of that scientific and technological innovation is only going to grow with each year - just an observation about how rapidly these scientific and technological domains are…
BENEDIKT WECHSLER: You're right, and I think everybody was surprised. It's amazing.
ALI WYNE: I posed this question to Kaja earlier that we think about cyber diplomacy as being different from what we might call more “traditional” forms of diplomacy. But, in traditional diplomacy, ambassador, we often think of governments in aligned groups. So we think, for example, of the NATO alliance for security or we think of countries that support free markets and free speech versus those that support more state control. Are there similar alignments when it comes to matters of cyber diplomacy?
BENEDIKT WECHSLER: Yes, definitely. I mean, that is no secret. I think you have like-minded countries also in the tech and the digital space because we want to see technology being an enabler for better reaching sustainable development goals, expanding freedom, strengthening human rights, and not undermining human rights. And of course, there are countries in the world who have a contrary view and position.
On the other hand, I think it's interesting to see that in the history of mankind and international relations, there have always been antagonistic situations. But still, there was always some agreement, some consensus, on certain things - that, as humanity, we have to stick together. Even in the coldest times of the Cold War, there was collaboration in space between the U.S. and the Soviet Union.
And I think we also feel a little bit that with this digital world, the internet, nobody has really cut itself off from it, because they know it's just too important. And we have to build on this common heritage and common base, even as other countries pursue other ways of using this technology. There are things in warfare that we decided we shouldn't do, and we should stick to that - and maybe develop it where needed - but especially keep those commitments in the online, cyber world as well.
ALI WYNE: Kaja, I want to bring you back into the conversation. From Microsoft's point of view, what is your overall sense of the trend line when it comes to nation-state activity online? Are we moving more towards order, or more towards chaos? And what can industry, including companies such as Microsoft, do to support the kind of diplomacy the ambassador has described - to advance what you might call a rules-based international order online?
KAJA CIGLIC: I would probably say it's a bit of both. This situation is both terrifying and sort of deteriorating, I would say, at the same time. Part of the reason is because the technology's evolving so fast.
ALI WYNE: Right.
KAJA CIGLIC: And so, as a result, it means that every time there is a new tool put on the market, so to say, it can, and someone will try and test it as a weapon. So I think that's the reality of human nature. And we are seeing that the deteriorating situation is also reflecting what's going on in the offline world, right. I think, at the moment, in terms of geopolitics, we're not in the best place that we have ever been, and that's reflected online. At the same time, I would say not everything is super bleak. As the ambassador was saying, we do have rules. We have international law. We have human rights commitments. We have International Humanitarian Law, and while we do see these being breached, they're not being breached by the vast majority of countries.
They're being breached by a very small minority of countries. And I think, increasingly, we're seeing states that believe in the values of international law, that believe in these commitments that have been made in other domains over the past hundreds of years as important to reinforce and support in the online world. And as a result, they’re calling out bad behavior, they're calling out breaches of international law, and that's a very positive development.
But that doesn't mean that we should be complacent, right. I think, increasingly, states are seeing cyber as a domain of conflict. Increasingly, we're seeing the private sector developing tools and weapons that are being used for offensive purposes.
Cyber mercenaries are effectively a new market that has emerged over the past five or so years, and it is booming because there is such an appetite among governments for those types of technologies. And to your last question, in terms of how the industry can help and support - some of it is simply that we can share what we see. The big companies, in particular, are often the vector through which governments or other targets get attacked: they use Microsoft systems, they operate on a cloud platform. So we see the actors, we see the trends, and we see the emerging new techniques. And I think that's important for the foreign policy experts around the world to be aware of, understand, and be able to act upon.
ALI WYNE: Ambassador, I want to come back to you. How do you think that the tech sector, in particular, should engage with the work of cyber diplomats such as yourself?
BENEDIKT WECHSLER: I think one important aspect is that we are dealing here with an infrastructure issue. It's not just a tool or a product; it's an infrastructure, and a vital infrastructure. And I think that also implies how the tech companies should and could be part of it: they should build this infrastructure together with us. And we, at the same time, can learn a lot from the tech companies, also internally. For instance, before I took up this position, I had never heard of the expression red teaming. But that's a whole way of working and making products safe - like how you check a car before you put it on the market.
And I think if we work together and adapt these red teaming processes to also involve human rights aspects and other safety aspects, then the products that come to market will already be developed in such a way that they cannot be used for malign purposes. And I think we also have to think of new forms of governance where the tech sector is really a responsible, constituting part of governance - not just looking at an issue from a lobbying perspective, or at how it can influence regulation in this or that direction, but really building the whole house together.
ALI WYNE: Kaja, I think that we often talk about cyber operations in peacetime, and it's an entirely different matter when we're talking about cyber operations being used in armed conflict. Microsoft has been doing a lot of work reporting on Russia's use of cyber attacks in Ukraine. What has it looked like to integrate cyber operations in war - really, I think, for the first time - and what has the impact been?
KAJA CIGLIC: I think, definitely, for the first time at this scale where we're talking about the use of cyber in a conflict, the impact has, in effect, been tremendous. Even just before the war began, the Russians had effectively either prepositioned for espionage purposes or begun destructive operations in Ukraine that supported their military goals. Over the past year and a half, we've seen a level of coordination between attacks conducted online - cyber attacks, including on civilian infrastructure, not just as part of military operations - and attacks conducted by traditional military means, so effectively bombs.
And so we've definitely seen similar targets attacked in a similar time period in a specific part of the country. The alignment between cyber and kinetic operations in war has been, to that extent, something that I think we've never seen - not just Microsoft, but in general. The other thing to consider is that the Russians have frequently used foreign influence operations, so disinformation, as part of their war effort, oftentimes in connection with cyber operations as well.
This is a tool, a technique, that the Russians have used in the past as well - if you look at their traditional foreign influence operations, just not online, in the '70s and the '80s - and it has transposed over to the hack-and-leak world. They've used it both to weaken the Ukrainian resolve and to undermine Ukraine's support abroad, particularly in Europe but elsewhere as well.
And the reason, I would say, that neither of those has been nearly as successful as perhaps expected is the unexpected but wonderful ability of both Western governments and the private sector, across the board - irrespective of companies being competitors or anything like that - to come to Ukraine's defense.
We think that a lot of the attacks have been blunted also because the Ukrainian government very quickly, at the beginning of the war, decided to migrate a lot of the government data to the cloud, again, with Microsoft but also with competitors, and were thus able to effectively protect and continue operating the government normally, but from abroad.
ALI WYNE: So cyber offenses and cyber defenses, it seems, are increasing in parallel - a kind of tit-for-tat game. Ambassador, considering this example of hybrid warfare in Ukraine, what are the lessons for the diplomatic community moving forward? And assuming that future armed conflicts will have similar cyber elements, how should the international system prepare?
BENEDIKT WECHSLER: I mean, there's a component of probably the classic disarmament processes. Normally, I think you can expect every sort of nation or state would like to have a position of superiority just to feel safe and to be smarter and more capable than the others so that an attacker wouldn't dare to attack them.
But in the nuclear arms race we arrived at a stage where we had to say: this is MAD - Mutually Assured Destruction. And although some probably still have an edge, I think in the cyber world we can, if we really want, more or less destroy each other mutually. That understanding leads you to a point where states will probably also accept, "Okay, let's not kill each other totally."
ALI WYNE: That's a good starting point.
BENEDIKT WECHSLER: Yeah.
KAJA CIGLIC: That would be good, yeah.
BENEDIKT WECHSLER: So we'll have to ban some things. For some things, maybe, we just have to sit together and say, "Well, we should outright ban this." And then there are other things that we cannot ban, but where we have to reduce the negative impact on civilians, on critical infrastructure, on vulnerable persons, and so forth. And so then we come to the story of the International Committee of the Red Cross and the Geneva Conventions: if we can't ban or eliminate war, let's at least see that we can make it as little impactful as possible for everybody. And of course, now we are coming into totally new terrain with AI as well - autonomous lethal weapons systems, drones.
And also, when you think of the satellites issue - a company like Starlink can more or less decide, "Well, now we can't give you coverage anymore," and then basically your operation will stop because you no longer have the infrastructure to launch an attack. I think it's something we have to tackle in the logic of disarmament: banning, mitigating, or limiting effects, but also working on very specific items. We had the Landmines Convention; we had certain munitions that we wanted banned. So I think it's going to be the very hard, thorny work of diplomats to try to limit this to the maximum possible extent.
ALI WYNE: Even beyond the weaponization of cyberspace, technology itself is constantly evolving. Just this year alone, we've seen a real explosion in generative AI and, as a result, a rush from both governments and the private sector to find some kind of regulatory framework. How do you view this new factor in terms of cyber diplomacy? How does it affect the work that you're doing on a daily basis?
BENEDIKT WECHSLER: Well, I heard somebody say, "AI is the new digital." And, of course, we are also trying to see how we can develop tools based on AI to make diplomacy more efficient - also to make it a tool to build more consensus on issues, because you can probably gather more information, more data, to show, "Well, we have a common interest here." And we launched a project, the Swiss Call on Trust and Transparency in AI, where we are not looking into new regulations but rather at what kinds of formats of collaboration, what kinds of platforms, we need to build up in order to get more trust and transparency.
And that builds a lot on what has been done in the area of cybersecurity, on actions against ransomware. And again, as Kaja said, it's about companies and diplomats, or governments, working together and sharing expertise - because it's not a question of competition with the private sector. We are building an infrastructure, a building that has to be solid, and then within that infrastructure we can compete again.
ALI WYNE: Kaja, I want to bring you back into the conversation, and let's just zero in particular on the implications of artificial intelligence for security. How does Microsoft think that AI will play into concerns around escalating cyber conflict? And will AI, I mean, effectively just pour gasoline on the fire?
KAJA CIGLIC: I really don't think so. I think we actually have great opportunity to gain a little bit of an asymmetric advantage as defenders in this space. The reason is, while obviously malicious actors will abuse AI and probably are abusing AI today, we are using AI already, and we'll continue to use it to defend.
In Ukraine, we're using AI to help our defenders on cybersecurity. Microsoft gets 65 trillion - I think it's some absurd number - signals daily on Microsoft platforms about what's going on online. Obviously, humans can't go through all of that to identify anomalies and neutralize threats. Technology can, and technology is, right. So the AI understands what's wrong - this has happened already, but it will improve even further - and can say, "Oh, okay, this malware attack looks similar to something that has happened before, so I will preemptively stop it independently," right? I think that will actually help us in terms of cybersecurity.
ALI WYNE: Tell us one concrete step that you would like to see taken to get us somewhat closer to a sustainable diplomatic framework for cyber. Kaja, let's start with you, and then we'll bring the ambassador to close this out.
KAJA CIGLIC: I think it'd be really important for the UN to recognize this as a real issue. There is a bunch of working groups, and we've seen the Secretary General make statements about it and call on states to act. But a permanent body within the United Nations that would discuss some of these issues would be very welcome. At the moment, there are a lot of working groups, or groups of governmental experts, that get renewed every five or so years, and there's not a dedicated effort necessarily focused on some of these issues.
And then, of course, we would love to find a way for the industry, and the multi-stakeholder community writ large, to be able to participate and share their insights and knowledge in this area. Like I was saying at the beginning, there are opportunities. I would say Microsoft and many other private sector groups get blocked a fair amount by certain states, and I get that it's a political decision at some level, but it's something that we'd really, really like to see institutionalized - both a process and the multi-stakeholder inclusion.
BENEDIKT WECHSLER: I see a sort of historic window of opportunity opening up with the work on the Global Digital Compact. We have the Summit of the Future next year, the Common Agenda. So, a little bit like with the SDGs, we as the world community are coming together and saying, "Okay, this is really too important. We are all in this together." Maybe movies like Oppenheimer are also reminding us of some things from the past.
And I'd like to close with Albert Einstein, who said in the 1930s that technological advances "could have made human life carefree and happy if the development of the organizing power of man" - and women, I would say today - "had been able to keep step with its technical advances. Instead, the hard-won achievements of the machine age in the hands of our generation are as dangerous as a razor in the hands of a three-year-old child." So I hope that we see this urgency, but also this huge opportunity - one we had once before as humanity - and that we don't fail to grasp it this time.
ALI WYNE: KAJA CIGLIC, Senior Director of Digital Diplomacy at Microsoft, Ambassador BENEDIKT WECHSLER, Switzerland's Ambassador for Digitization. Thank you so much for taking the time to speak with me. Thank you so much for taking the time to enlighten our audience. It's been a real pleasure.
KAJA CIGLIC: Thank you. This was a great conversation.
BENEDIKT WECHSLER: Thank you. It was a privilege to be with you.
ALI WYNE: And that's it for this episode of Patching the System. There are more to come, so follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of this new season. I'm Ali Wyne. Thanks very much for listening.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
- Podcast: Cyber Mercenaries and the digital “wild west" ›
- Attacked by ransomware: The hospital network brought to a standstill by cybercriminals ›
- Hacked by Pegasus spyware: The human rights lawyer trying to free a princess ›
- The threat of CEO fraud and one NGO's resilient response ›
- Podcast: Cyber mercenaries and the global surveillance-for-hire market - GZERO Media ›
- Podcast: Foreign influence, cyberspace, and geopolitics - GZERO Media ›
- Podcast: Would the proposed UN Cybercrime Treaty hurt more than it helps? - GZERO Media ›
- Podcast: Can governments protect us from dangerous software bugs? - GZERO Media ›
Private sector partnership key to funding digital access for all
To connect the next two billion people to the internet, funding is crucial – and no small amount of it. The same goes for creating a global warning system that uses satellite data to preempt global disasters. To accomplish projects of this scale, the UN requires a massive financial war chest.
Few understand the scale better than Axel Van Trotsenburg, the World Bank's Senior Managing Director. But with private-sector partnerships, it can be done, he noted during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
"In Africa, the African Union has taken decisions on the digitalization," he said, "I think we need to scale this massively, and I think it is doable and you see in countries like Kenya that have very sophisticated payment systems, sometimes better than in OECD countries."
The discussion was moderated by Nicholas Thompson of The Atlantic. It was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
"Access is a fundamental right" - Digital activist Vilas Dhar
The world is becoming increasingly digital, with 60% of global GDP driven by digital participation, yet more than two billion people still lack basic connectivity.
Vilas Dhar, a leading activist for a more equitable tech-enabled world, emphasizes three elements contributing to this divide: connectivity, data gaps, and technical capacity.
“Access is a fundamental right and not something to be solved by delivering a last-mile piece of fiber or connectivity,” he commented during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
Dhar also acknowledges growing concerns around artificial intelligence and the question of who will lead its regulation.
“We live in a world where AI is in every headline, and we absolutely acknowledge that the vast majority of AI capacity is held in private sector tech companies. This is in and of itself a digital divide.”
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
- Should internet be free for everyone? A Global Stage debate ›
- The fight to “connect every last person” to the internet ›
- COVID upended the job market & focused employers on skills ›
- 2 billion new internet users joined in 5 years but growth is uneven ›
- US-China tech tensions: the impact on the global digital landscape ›
- The digitalization divide: opportunities and challenges in emerging markets ›
2 billion new internet users joined in 5 years but growth is uneven
A whopping two billion new internet users have come online in the past five years. This transformative shift, driven in part by the pandemic, has revolutionized the way people learn and work. But it’s important to note that this growth is not evenly distributed, and significant efforts are required, particularly in Africa, to bridge the digital divide, says Digital Impact Alliance CEO Priya Vora.
Vora emphasizes the importance of addressing issues of trust, individual agency, and data privacy as the digital world continues its rapid expansion. She also touches on the changing landscape of digital commerce, where a few dominant players could translate economic power into political influence. As the conversation and challenges surrounding the digital world evolve, so too should the global response, says Vora.
Vora joined other geotech experts in a GZERO livestream event, presented by Visa, to discuss the challenges and opportunities that nation-states face when it comes to digitization, and how it could shape a more inclusive and resilient future.
Watch the full livestream conversation: What Ukraine's digital revolution teaches the world
What Ukraine's digital revolution teaches the world
The threat of the Russian bear has been putting its neighbors on edge for years, and while plenty has been spent on beefing up their militaries, there’s now a whole other line of defense: digitization. Kyiv has harnessed its digital technology to provide government services to a whopping 19 million Ukrainians, despite daily bombings and devastation.
How has digitization helped Ukraine navigate first a pandemic and now a war? What lessons can be learned by other countries? GZERO asked geotech experts in a livestream event, presented by Visa, about the challenges and opportunities that nation-states face when it comes to digitization, and how it could shape a more inclusive and resilient future. The event was moderated by Goodpods' JJ Ramberg. Watch the full discussion above.
In 2020, Ukraine launched Diia, a mobile application that connects Ukrainians to more than 120 government services – from digital driver’s licenses to business filings to tax payments, says Mohamed Abdel-Kader, who helms the Innovation, Technology, and Research Hub at USAID. He and his teams help countries adopt new technological innovations in effective ways. Since the war, the app – also forged in partnership with UK Aid, Eurasia Foundation, and private sector partners – has also helped address wartime needs. For example, it has helped Ukrainians claim benefits for war-related property damage, and, at times, has been used to broadcast news and video when other networks are down. It has also helped with Ukraine’s mobilization and enabled the reporting of Russian troop sightings.
Unsurprisingly, it was another former Soviet republic, Estonia, that emerged as a pioneer in digitization back in the 2000s. Being so close to that Russian bear – it’s just a three-hour drive from St. Petersburg to the Estonian border town of Narva – helped focus the digital mind. The country was already a digital leader, but in 2007, Estonia withstood a month-long cyberattack that crippled its government systems and digital infrastructure. There was speculation of Russian involvement, but the attack was a “blessing in disguise,” says Carmen Raal, a digital transformation adviser at e-Estonia, the country’s electronic government services program. This is because it forced the country to focus on cybersecurity before it became critical for the private and public sectors. The resulting innovations, she adds, have made Estonia a world leader in cybersecurity.
Today, Estonia offers 99.99% of its services online, Raal says, including universal online banking solutions and super-quick business setups. It takes “less than three hours to establish a company and, of course, it can be done fully online,” she adds.
What’s more, Estonia offers something called e-Residency, enabling individuals from anywhere in the world to apply online for a digital identity with which they can establish businesses in Estonia. “We have over 100,000 e-residents, and they have established over 27,000 Estonian companies,” Raal texted me after the event.
When it comes to digitization, both Ukraine and Estonia offer models of early investment, evolution, responses to security crises, and government-citizen relations, while the Estonian services reflect how governments can help residents (near and far) embrace economic opportunity. But globally, there remains plenty of work to ensure access, equality, and trust.
Digitization, after all, is great for those in the game, but as the rich get richer, governments need to ensure everyone has access. Last year, 60% of global GDP was generated by digitally enabled businesses, and in the last five years, the world has added two billion new internet users.
As digitalization continues, with technology transforming businesses and work, says Eurasia Group's Geotechnology Director Alexis Serfaty, “it’s absolutely going to create new opportunities and new wealth, especially, I think, in emerging markets with younger populations and with more robust digital public infrastructure." In other words, there has been record growth in access, but 2.5 billion people worldwide are still not online, and better systems and expanded access could translate into exponential growth – if done well.
Expanding trust in digital systems is also essential – only 62% of those GZERO surveyed before the livestream said they had faith in tech companies protecting their data.
The challenges of building secure public infrastructure that expands participation in inclusive ways are universal, says Priya Vora, CEO of Digital Impact Alliance, and there’s a lot more work to be done. “I don’t think any country has figured it out yet,” she says.
Digital accessibility and the digitization of services, along with digital skills, government oversight, and user trust, need to grow in tandem to foster greater economic growth.
Hitting the right balance of government oversight will be key to getting this all right, says Rajiv Garodia, Visa’s global head of government solutions, who works with governments around the globe on digitization. He calls for a sound economic model and infrastructures designed with operational resilience, along with clear roles and responsibilities laid out for both the public and private sectors.
Global crises, like the ongoing war in Ukraine, also present opportunities for innovation and growth. Despite its current plight, Kyiv is making strides in a key sector that’s only becoming more important.
Watch the video above to hear the experts discuss the power of digitization – both amid war and for public and private sector growth.
Be very scared of AI + social media in politics
Why is artificial intelligence a geopolitical risk?
It has the potential to disrupt the balance of power between nations. AI can be used to create new weapons, automate production, and increase surveillance capabilities, all of which can give certain countries an advantage over others. AI can also be used to manipulate public opinion and interfere in elections, which can destabilize governments and lead to conflict.
Your author did not write the above paragraph. An AI chatbot did. And the fact that the chatbot is so candid about the political mayhem it can unleash is quite troubling.
No wonder, then, that AI, powered by social media, is Eurasia Group’s No. 3 top risk for 2023. (Fun fact: The title, “Weapons of Mass Disruption,” was also generated in seconds by a ChatGPT bot.)
How big a threat to democracy is AI? Well, bots can't (yet) meddle in elections or peddle fake news to influence public opinion on their own. But authoritarians, populists, and opportunists can deploy AI to help do both of these things better and faster.
Philippine President Ferdinand Marcos Jr. relied heavily on his troll army on TikTok to win the votes of young Filipinos in the 2022 election. Automating the process with bots would allow him, or any politician with access to AI, to cast a wider net and leap into viral conversations almost immediately on a social platform that already runs on an AI-driven algorithm.
Another problem is deepfakes, videos of people whose faces or bodies are altered to make them appear as if they are someone else, typically intended for political disinformation (check out Jordan Peele's Obama). AI now makes them so well that they are very hard to spot. Indeed, DARPA — the same Pentagon agency that brought us the internet — is perfecting its own deepfakes in order to develop tech to help detect what’s real and what’s fake.
Still, the "smarter" AI gets at propagating lies on social media, and the more widespread its use by shameless politicians, the more dangerous AI becomes. By the time viral content is proven to be fake, it might already be too late.
Imagine, let's say, that supporters of Narendra Modi, India's Hindu nationalist PM, want to fire up the base by fanning sectarian flames. If AI can help them create a half-decent deepfake video of Muslims slaughtering a cow — a sacred animal for Hindus — that spreads fast enough, the anger might boil over before people check if the clip is real, if they even trust someone at all to independently verify it.
AI can also disrupt politics by getting bots to do stuff that only humans, however flawed, should. Indeed, automating the political decision-making process "can lead to biased outcomes and the potential for abuse of power," the bot explains.
That’s happening right now in China, an authoritarian state that dreams of dominating AI and is already using the tech in court. Once the robot judges are fully in sync with Beijing's Orwellian social credit system, it wouldn’t be a stretch for them to rule against people who've criticized Xi Jinping on social media.
So, what, if anything, can democratic governments do about this before AI ruins everything? The bot has some thoughts.
"Governments can protect democracy from artificial intelligence by regulating the use of AI, ensuring that it is used ethically and responsibly," it says. "This could include setting standards for data collection and usage, as well as ensuring that AI is not used to manipulate or influence public opinion."
Okay, but who should be doing the regulating, and how? For years, the UN has been working on a so-called digital Geneva Convention that would set global rules to govern cyberspace, including AI. But the talks have been bogged down by (surprise!) Russia, whose president, Vladimir Putin, warned way back in 2017 that the nation that leads in AI will rule the world.
Governments, the bot adds, “should also ensure that AI is transparent and accountable, and that its use is monitored and evaluated. Finally, [they] should ensure that AI is used to benefit society, rather than to undermine it."
The bot raises a fair point: AI can also do a lot of good for humanity. A good example is how machine learning can help make us live healthier and longer by detecting diseases earlier and improving certain surgeries.
But, as Eurasia Group's report underscores, "that's the thing with revolutionary technologies, from the printing press to nuclear fission and the internet — their power to drive human progress is matched by their ability to amplify humanity's most destructive tendencies."
- AI at the tipping point: danger to information, promise for creativity - GZERO Media ›
- Senator Mitt Romney on Tiktok: shut it down - GZERO Media ›
- Can the US stay ahead of China on AI? - GZERO Media ›
- Toxic social media & American divisiveness - GZERO Media ›
- Politics, trust & the media in the age of misinformation - GZERO Media ›
- Podcast: The past, present and future of political media - GZERO Media ›
- Can we trust AI to tell the truth? - GZERO Media ›
- Ian Bremmer: Algorithms are now shaping human beings' behavior - GZERO Media ›
- How AI can be used in public policy: Anne Witkowsky - GZERO Media ›
- AI's role in the Israel-Hamas war so far - GZERO Media ›
- UK AI Safety Summit brings government leaders and AI experts together - GZERO Media ›
- AI in the hands of evil masterminds - GZERO Media ›
- How is the world tackling AI, Davos' hottest topic? - GZERO Media ›
- This year's Davos is different because of the AI agenda, says Charter's Kevin Delaney - GZERO Media ›
- Podcast: Talking AI: Sociologist Zeynep Tufekci explains what's missing in the conversation - GZERO Media ›