Global Stage Podcast | Patching the System | Foreign Influence, Cyberspace, and Geopolitics

TRANSCRIPT: Foreign Influence, Cyberspace, and Geopolitics

Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.

Teija Tiilikainen: What the malign actors are striving for is to see us start to compromise our values, to question our own values. We should make sure that we are not moving in the direction the malign actors want to steer us.

Clint Watts: From a technical perspective, influence operations are detected by people. Our work of detecting malign influence operations is really a human problem powered by technology.

Ali Wyne: When people first heard this clip that circulated on social media, more than a few were confused, even shocked.

AI VIDEO: People might be surprised to hear me say this, but I actually like Ron DeSantis a lot. Yeah, I know. I'd say he's just the kind of guy this country needs, and I really mean that.

Ali Wyne: That was not Hillary Clinton. It was an AI-generated deepfake video, but it sounded so realistic that Reuters actually investigated it to prove that it was bogus. Could governments use techniques such as this one to spread false narratives in adversary nations or to influence the outcomes of elections? The answer is yes, and they already do. In fact, in a growing digital ecosystem, there are a wide range of ways in which governments can manipulate the information environment to push particular narratives.

Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us. Today, we're looking at the growing threat of foreign influence operations, state-led efforts to misinform or distort information online.

Joining me now are Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats, and Clint Watts, General Manager of the Microsoft Threat Analysis Center. Teija, Clint, welcome to you both.

Teija Tiilikainen: Thank you.

Clint Watts: Thanks for having me.

Ali Wyne: Before we dive into the substance of our conversation, I want to give folks an overview of how both of your organizations fit into this broader landscape. So Clint, let me turn to you first. Could you quickly explain what it is that Microsoft's Threat Analysis Center does, what its purpose is, what your role is there, and how its approach differs from what Microsoft has done in the past to highlight threat actors?

Clint Watts: Our mission is to detect, assess, and disrupt malign influence operations that affect Microsoft, its customers, and democracies. And it's quite a bit different from how Microsoft handled it before we joined. We were originally a group called Miburo, and we had worked on disrupting malign influence operations until we were acquired by Microsoft about 15 months ago.

And the idea behind it is that we can connect what's happening in the information environment with what's happening in cyberspace, and really start to help improve the integrity of the information ecosystem when it comes to authoritarian nations that are trying to run malign influence attacks. Every day, we're tracking Russia, Iran, and China worldwide, in 13 languages, in terms of the influence operations they conduct. That's a combination of websites and social media. Hack-and-leak operations are a particular specialty of ours, where we work with the Microsoft Threat Intelligence Center: they see the cyberattacks, we see the alleged leaks or influence campaigns on social media, and we can put those together to attribute what different authoritarian countries are doing, or trying to do, to democracies worldwide.

Ali Wyne: Teija, I want to pose the same question to you, because today might be the first time some of our listeners are hearing of your organization. What is the role of the European Center of Excellence for Countering Hybrid Threats?

Teija Tiilikainen: This center of excellence is an intergovernmental body that was originally established six years ago by nine governments from various EU and NATO countries, but today it covers 35 governments. So we cover 35 countries, all of them EU member states or NATO allies, and we also cooperate closely with the European Union and NATO. Our task is more strategic: we try to analyze the broad range of hybrid threat activity. By hybrid threats, we are referring to unconventional forms of threat - election interference, attacks against critical infrastructure, manipulation of the information space, cyber, and all of that. We create capacity, we create knowledge and information about these things, and we try to provide recommendations and share best practices among our governments about how to counter these threats and how to protect our societies.

Ali Wyne: We're going to be talking about foreign influence operations throughout our conversation, but first, let's discuss hybrid conflicts such as the one we're seeing, and have seen so far, in Ukraine. I'm wondering how digital tactics in conflicts have evolved. Bring us up to speed on where we are now with these digital tactics and how quickly we've gotten here.

Teija Tiilikainen: It's easiest to start with our environment, where societies rely more and more on digital solutions; information technology is part of our societies. We built it to the benefit of our democracies and their economies. But the deep conflict we are in, a conflict that has many dimensions, one of them being the one between democracies and authoritarian states, changes the role of our digital solutions, and all of a sudden we see how they have started to be exploited against our security and stability. So it is about our reliance on critical infrastructures, where we have digital solutions, and about the whole information space and our cyber systems.

So this is really a new and strengthening dimension of conflicts worldwide, where we talk more and more about the need to protect our vulnerabilities, and those vulnerabilities increasingly lie in the digital space, in digital solutions. It is a very technological instrument, requiring ever more advanced solutions from our societies, and a very different understanding of threats and risks. Compare that with the more traditional picture, where armed attacks and military operations used to be threat number one; now it is about the resilience of our critical infrastructures and the security and safety of our information systems. So the world looks very different, and ongoing conflicts do as well.

Ali Wyne: And that word you used, Teija - resilience - I suspect we're going to be revisiting that word and that idea quite a bit throughout our conversation. Clint, let me turn to you now: how, in your experience, have foreign influence operations evolved online? How are they conducted differently today? Are there generic approaches, or do different countries execute these operations differently?

Clint Watts: To set the scene for where this started, I think the point to look at is the Arab Spring. When it came to influence operations, the Arab Spring, Anonymous, Occupy Wall Street - lots of different political movements - all occurred at roughly the same time. We often forget that. That's because social media allowed people to come together around ideas, organize, mobilize, and participate in different activities. That was very significant, I think, for nation states, but for one in particular, Russia, which was a little thrown off by the Arab Spring and what happened in Egypt, for example, but at the same time intrigued by what it would mean to be able to go into a democracy, infiltrate it in the online environment, and then start to pit its people against each other.

That was the entire idea of their Cold War strategy known as active measures: go into any nation, particularly the United States, or any set of alliances, infiltrate those audiences, and win through the force of politics rather than the politics of force. You can't win on the battlefield, but you can win in their political systems. That was very, very difficult to do in the analog era. Fast-forward to the social media age, and what we saw the Russians do was take that same approach with overt media, fringe media, or semi-covert websites that looked like they came from the country being targeted. They combined that with covert troll accounts on social media that looked and talked like the target audience, and then added the layer that no one else had really put together: cyberattacks, stealing people's information and timing the leaks to drive people's perceptions.

That is what really started about 10 years ago, and our team picked up on it very early, in January 2014, around the conflict in Syria. They had already been doing it in the conflict in Ukraine 10 years ago, and then we watched it move toward elections: Brexit first, the U.S. election, then the French and German elections. This is that 2015, '16, '17 period.

What's evolved since is everyone recognizing the power of information and how to use it, and authoritarians looking to grab onto that power. So Iran has been quite prolific in it. They have some limitations, they have resource issues, but they still carry out some pretty complex information attacks on audiences. And now China is the real game changer. We just released our first East Asia report, where we diagrammed what we see as an incredible scaling and centralization of those operations.

So looking at how to defend, I think it's remarkable that, as Teija mentioned, in Europe a lot of countries that are actually quite small have been able to mobilize their citizens. Lithuania, for example, has done this in a very organized way as part of the state's defense: coming together around a strategy and networking people to spot disinformation and refute it if it comes from Russia.

In other parts of the world, though, it's been much, much tougher, particularly in the United States, where we've seen the Russians and other countries infiltrate audiences while you're trying to figure out how to build a coherent defense around a plurality of views, identities, and politics. I actually think the larger a country is, the more difficult it is to defend, particularly among democracies.

And then the other thing is you've seen the resilience and rebirth of alliances, and not just on battlefields, like NATO. You see NATO, the EU, and the Hybrid CoE organizing together to agree on definitions and terminology - what is a harm to democracy - and then on how best to combat it. So it's been an interesting transition over the last six years, and you're starting to see a lot of organization in terms of how democracies will defend themselves moving forward.

Ali Wyne: Teija, I want to come back to you. What impact have foreign influence operations had in the context of the war in Ukraine, both inside Ukraine but also globally?

Teija Tiilikainen: I think this is a full-scale war also in the sense that it is very much a war about narratives. It is about whose story, whose narrative, is winning this war. I must say that Russia has been quite successful with its own narrative, if we consider how supportive the Russian domestic population still is of the war and of the regime - though of course other instruments are also being used.

Before the war started, Russia began to promote a very false story about what was going on in Ukraine. There was the argument about an ongoing Nazification of Ukraine. There was another argument about a genocide of the Russian minority in Ukraine that was supposedly taking place. And there was also a narrative about how Ukraine had become a tool in the toolbox of the Western alliance, that is NATO, or of the U.S., to exert influence, and how it was being used offensively against Russia. These were of course all parts of the Russian information campaign - disinformation - with which it justified and legitimized its war.

If we look at the information space in Europe, or more broadly in Africa, for instance, we see today that the Western narrative about the real causes of the war - how Russia violated international law and the integrity and sovereignty of Ukraine - this real, fact-based narrative is not doing that well. That proves the strength of foreign influence operations when they are strategic and well-planned, and of course when they are used by actors such as Russia and China, which tend to cooperate. Outside the war zone, too, China is using this Russian narrative to put the blame for the war on the West and to present itself as a reliable international actor.

So there are many elements to this war, not only the military activity, and I would particularly want to emphasize the role of these information operations.

Ali Wyne: It's sobering not only thinking about the impact of these disinformation operations, these foreign influence operations, but also, Teija, you mentioned the ways in which disinformation actors are learning from one another and I imagine that that trend is going to grow even more pronounced in the years and decades ahead. So thank you for that answer. Clint, from a technical perspective, what goes into recognizing information operations? What goes into investigating information operations and ultimately identifying who's responsible?

Clint Watts: I think one of the ironies of our work is that, from a technical perspective, influence operations are detected by people. One of the big differences, especially between MSTIC, the cyber team, and our team, is that our work of detecting malign influence operations is really a human problem powered by technology. If you want to understand and get your leads, we work more like a newspaper in many ways. We have a beat that we're covering; let's say it's Russian influence operations in Africa. And we have real humans, people with master's degrees who speak the language - I think the team speaks 13 languages in total among the 28 of us. They sit and watch and get enmeshed in those communities, following the discussions that are going on.

But ultimately we're using some technical skills, some data science, to pick up those trends and patterns, because the one thing that's true of influence operations across the board is that you cannot influence and hide forever. Ultimately your position or, as Teija said, your narratives will track back to the authoritarian country - Russia, Iran, China - and what they're trying to achieve. And there are always tells, common tells, or context: words, phrases, and sentences used out of context. And then you can also look at the technical perspective. Nine years ago, when we came onto the Russians, the number one technical indicator of Russian accounts was Moscow time versus U.S. time.

Ali Wyne: Interesting.

Clint Watts: They worked in shifts. They were posing as Americans, but talking at 2:00 AM, mostly about Syria. And so it stuck out, right? That was a contextual thing. You move, though, from those human tips and insights, almost like a beat reporter, to using technical tools. That's where we dive in. So that's everything from understanding associations of time zones, to how different batches of accounts on social media might work in synchronization together, to how they'll change topics time and time again. The Russians are a classic example, wanting to talk about Venezuela one day, Cuba the next, Syria the third day, the U.S. election the fourth, right? They move in sequence.

And so I think when we're bringing people on board and training them, it's always interesting. We try to pair them up in teams of three or four with a mix of skills. We have a very interdisciplinary set of teams: one person will be very good at understanding cybersecurity and the technical aspects; another is a data scientist; all of them can speak a language; and ultimately one is a former journalist or an international relations student who really understands the region. It's that team environment, working together in person, that really allows us to do the detection, and then use more technical tools to do the attribution.
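
Purely as an illustration of that "working in shifts" tell, here is a minimal sketch, in Python, of the kind of heuristic Clint describes. The account names, timestamps, workday window, and threshold are all invented for the example; Microsoft's actual tooling is not public and draws on far richer signals.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Hypothetical post records: (account, UTC timestamp, claimed locale).
posts = [
    ("patriot_eagle_76", "2024-03-04T06:30:00", "en-US"),  # 09:30 Moscow
    ("patriot_eagle_76", "2024-03-04T10:15:00", "en-US"),  # 13:15 Moscow
    ("patriot_eagle_76", "2024-03-04T13:45:00", "en-US"),  # 16:45 Moscow
    ("liberty_mom_1776", "2024-03-04T01:12:00", "en-US"),  # 04:12 Moscow
]

MOSCOW = timezone(timedelta(hours=3))  # MSK, UTC+3

def moscow_hour_histogram(rows):
    """Count, per account, how often it posts in each Moscow-time hour."""
    hours = {}
    for account, ts, _locale in rows:
        t = datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)
        hours.setdefault(account, Counter())[t.astimezone(MOSCOW).hour] += 1
    return hours

def flag_shift_workers(rows, workday=range(9, 18), threshold=0.8):
    """Flag accounts claiming a U.S. locale whose posting activity falls
    almost entirely inside Moscow business hours (the 'shift work' tell)."""
    flagged = []
    for account, counts in moscow_hour_histogram(rows).items():
        total = sum(counts.values())
        in_shift = sum(counts[h] for h in workday)
        if total and in_shift / total >= threshold:
            flagged.append(account)
    return flagged

print(flag_shift_workers(posts))  # ['patriot_eagle_76']
```

A signal like this is only a lead: as Clint notes, analysts would combine it with the contextual tells - language, topics, synchronized posting across batches of accounts - before drawing any conclusion.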

Ali Wyne: So you talked about identifying and attributing foreign influence operations, and that's one matter, but how do you actually combat them? And what role, if any, can the technology industry play in combating them?

Clint Watts: We're at a key spot, I think, at Microsoft in protecting the information environment, because we understand the technical signatures much better than any one government probably could, or should. There are lots of privacy considerations - we take maintaining customer privacy very seriously at Microsoft. At the same time, the role tech can play is illustrated by some of our recent investigations. In one of them, we found more than 30 websites being run out of the same three IP addresses, all sharing content pushed from Beijing into local environments, and the local communities have no idea that those are actually Chinese state-sponsored websites.

So what we can do, being part of the tech industry, is confirm from a technical perspective that all of this activity is linked together. I think that's particularly powerful. Also, in terms of what we call the cyber-influence convergence, we can see where an elected official in one country is targeted by a foreign hack-and-leak operation. We can see where the hack occurred - on Russia and Iran in particular we have very strong attribution, and we publish on it frequently - and then we can match that up with the leaks we see coming out and where they come from. And usually the first person to leak the information is in bed with the hacker who got it. So that's another role tech can play: awareness of who the actors are, what the connections are between an influence operation and a cyberattack, and how that can change people's perspectives, say, going into an election.
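
To give a toy sense of the infrastructure overlap Clint mentions - dozens of sites on a handful of IP addresses - the sketch below groups domains by the IP they resolve to. The domain names and addresses are invented (they use documentation-reserved ranges), and real attribution of course rests on many more signals than shared hosting alone.

```python
from collections import defaultdict

# Hypothetical resolution data (domain -> hosting IP). In the investigation
# described above, 30+ seemingly local sites sat on just three IP addresses.
resolutions = {
    "dailymetronews.example":  "203.0.113.10",
    "heartlandvoice.example":  "203.0.113.10",
    "prairietribune.example":  "203.0.113.10",
    "coastalobserver.example": "203.0.113.11",
    "harborgazette.example":   "203.0.113.11",
    "unrelatedblog.example":   "198.51.100.7",
}

def shared_infrastructure(resolutions, min_domains=2):
    """Invert the domain->IP map and surface IPs hosting several
    ostensibly unrelated sites - one common signal of a covert network."""
    by_ip = defaultdict(set)
    for domain, ip in resolutions.items():
        by_ip[ip].add(domain)
    return {ip: doms for ip, doms in by_ip.items() if len(doms) >= min_domains}

for ip, domains in sorted(shared_infrastructure(resolutions).items()):
    print(ip, "->", sorted(domains))
```

Running this prints the two shared IPs with their domain clusters and drops the singleton, which is the shape of lead an investigator would then enrich with registration records, content overlap, and timing data.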

Ali Wyne: Teija, I want to come back to you and ask about a dilemma that I suspect you and your colleagues, and everyone operating in this space, are grappling with. One of the central tensions in combating disinformation, of course, is preserving free speech in the nations where it exists. How should democracies approach that balancing act?

Teija Tiilikainen: This is a very good question, and I think what we should keep in mind is that what the malign actors are striving for is to see us start to compromise our values, to question our own values: open society, freedom of speech, rule of law, democratic practices, and the principle of democracy. We should stick to our values and make sure that we are not moving in the direction the malign actors want to steer us. But it is exactly as you formulate the question: how do we make sure that these values are not exploited against our broad societal security, as is happening right now?

Of course, there is no single solution. Technological solutions can certainly help us protect our societies, as can broad awareness in society of these types of threats. Media literacy is the keyword mentioned many times in this context. A totally new approach to the information space is needed, and it can be achieved through education and study programs, but also by supporting quality media, the kind of media that relies on journalistic ethics. We must make sure that our information environment is solid and that in the future we will still be able to distinguish between disinformation and facts, because that distinction is getting very blurred in a situation where a competition between narratives is going on. Information has become a tool in many of the conflicts we have in the international space, and many times at the domestic level as well.

I would like to offer the center's model, because it's not only that we need cooperation between private actors - companies, civil society actors - and governments. We also need firm cooperation among like-minded states: sharing best practices and learning from each other. If the malign actors do that, we should adopt that model too when it comes to questions such as how to counter these threats, how to build resilience, and what solutions we have created in our different societies. This is exactly why our center of excellence was established: to provide a platform for that sharing of best practices and learning from each other.

So it is a very complicated environment in terms of our security and our resilience, and we need a broad package of tools to protect ourselves. But I still want to stress our values, and the very fact that this is what the malign actors would like to challenge and want us to challenge as well. So let's stick to them.

Ali Wyne: Clint, let me come back to you. We are heading into an electorally very consequential year - and perhaps I'm understating it: 2024 is going to be a huge election year, not only for the United States but also for many other countries, where folks will be going to the polls for the first time in the age of generative artificial intelligence. Does that fact concern you, and how is artificial intelligence changing this game overall?

Clint Watts: Yeah, so I think it's too early for me to say as strange as it is. I remind our team, I didn't know what ChatGPT was a year ago, so I don't know that we know what AI will even be able to do a year from now.

Ali Wyne: Fair point, fair point.

Clint Watts: In the last two weeks, I've seen or experimented with so many different AI tools that I just don't know the impact yet. I need to think it through and watch a little bit more in terms of where things are going with it. But there are a few notes that I would say about elections and deepfakes or generative AI.

Since the invasion of Ukraine, we have seen very sophisticated fakes of both Zelensky and Putin and they haven't worked. Crowds, when they see those videos, they're pretty smart collectively about saying, "Oh, I've seen that background before. I've seen that face before. I know that person isn't where they're being staged at right now." So I think that is the importance of setting.

Public versus private is where I think we'll see harms from AI. When people are alone and AI is used against them - say, a deepfake audio used for a wire transfer - we're already seeing the damage from that, and it's quite concerning. So from an election standpoint, you can start to look for the natural worries. Robocalls, to me, would really be more worrisome than the deepfake video we tend to think about.

The other thing about AI that I don't think gets enough examination, at least from a media perspective: everyone thinks they'll see a politician say something - your opening clip is an example of it - and it will fool audiences in a very dramatic way. But the power of AI for influence operations is mostly about understanding audiences, about being able to connect with an audience through a message and a messenger appropriate for that audience. By that I mean creating messages that make more sense.

Part of the challenge for Russia and China in particular is always context. How do you pass as an American or a European in their own country? Well, you have to be able to speak the language well - that's one thing AI can help with. Two, you have to look like the target audience to some degree, so you can create messengers now. But I think the bigger part is understanding context and timing and making the message seem appropriate. Those are all things where AI can be an advantage.

I would also note that here at Microsoft, my philosophy with the team is machines are good at detecting machines and people are good at detecting people. And so there are a lot of AI tools we're already using in cybersecurity, for example, with our copilots where we're using AI to detect AI and it's moving very quickly. As much as there's escalation on the AI side, there's also escalation on the defensive side. I'm just not sure that we've even seen all the tools that will be used one year from now.

Ali Wyne: Teija, let me just ask you about artificial intelligence more broadly. Do you think that it can be both a tool for combating disinformation and a weapon for promulgating disinformation? How do you view artificial intelligence broadly when it comes to the disinformation challenge?

Teija Tiilikainen: I see a lot of risks. I also see possibilities, and artificial intelligence can certainly be used as a resilience tool. But the question is who is faster - whether the malign actors take full advantage of AI before we find the loopholes and possible vulnerabilities. I think this is very much a hardcore question for our democracies. The day an external actor can interfere efficiently in our democratic processes - in elections, in election campaigns - the very day we can no longer be sure that what is happening in that framework is domestically driven, that will be very dangerous for the whole democratic model, for the whole functioning of our Western democracies.

And we are approaching that day. AI is, as Clint explained, one possible tool for malign actors who want not only to discredit the model but also to interfere in democratic processes and affect the outcomes and topics of elections. Deepfakes and all the solutions that use AI are so much more efficient, so much faster, and able to use so much data.

So, unfortunately, I see unlimited possibilities for the use of AI for malign purposes. This is what we should focus on today when we focus on resilience, including the resilience of our digital systems.

And this is also a highly unregulated field at the international level. If we think about weapons, about military force - well, now we are in a situation of deep conflict, but before we got here we used to have agreements, treaties, and conventions between states that regulated the use of weapons. Those agreements are no longer in very good shape. But what do we have in the realm of cyber? At the international level, it is a highly unregulated field. So there are many problems, and I can only encourage and stress the need to identify the risks that come with these solutions. And of course we need regulation of AI solutions and systems at the state level, and hopefully at some point also international agreements concerning the use of AI.

Ali Wyne: I want to close by emphasizing that human component and ask you: as we look ahead, and as we think about ways in which governments, private sector actors, individuals, and others in this ecosystem can be more effective at combating disinformation and foreign influence operations, what kinds of societal changes need to happen to neutralize the impact of these operations? Talk to us a little bit more about the human element of this challenge and what kinds of changes need to happen at the societal level.

Teija Tiilikainen: I would say that we need a cultural change. We need to understand societal security very differently, and to understand the risks and threats against it in a different way. And this is about education: about schools, about study programs at universities, about openness in the media regarding risks and threats.

And this applies also in countries that do not have the tradition. In the Nordic countries - here in Finland, in Scandinavia - we have a firm tradition of public-private cooperation when it comes to security policy. We are small nations, and our geopolitical region has been unstable for a long time, so public and private actors have needed to share the same understanding of security threats and to cooperate to find common solutions. I can only stress the importance of public-private cooperation in this environment.

We need more systematic forms of resilience. We have to ask ourselves: what does resilience mean? Where do we start building it? What are all the necessary components of resilience that we need to take into account? There are international elements, national elements, and local elements; there are governmental and civil society parts; and they are all interlinked. There is no safe space anywhere, so we need to create comprehensive solutions that cover all possible vulnerabilities. I would say that the security culture needs to change. In the old security culture, we tended to think about domestic threats and then international threats; now they are part of the same picture. We tended to separate military from nonmilitary; they, too, are very much interlinked in this new technological environment. So we need new types of thinking, a new type of culture. I would like to get back to universities and schools and try to engage experts in thinking about the components of this new culture.

Ali Wyne: Teija Tiilikainen, Director of the European Center of Excellence for Countering Hybrid Threats. Clint Watts, General Manager of Microsoft Threat Analysis Center. Teija, Clint, thank you both very much for being here.

Teija Tiilikainen: Thank you. It was a pleasure. Thank you.

Clint Watts: Thanks for having me.

Ali Wyne: And that's it for this episode of Patching the System. There are more to come. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear the rest of our new season. I'm Ali Wyne. Thank you very much for listening.

