Why the UN's 17 Sustainable Development Goals are not on track to be financed soon
The world faces a sustainable development crisis, and while most countries have strategies in place, they don’t have the cash to back them up. How far off track are we with the financing needed to support the UN’s 17 Sustainable Development Goals, ranging from quality education and health care to climate action and clean water?
Shari Spiegel, who runs the UN’s Financing for Sustainable Development Office, sat down with GZERO’s Tony Maciulis at a Global Stage event for the IMF-World Bank Spring Meetings this week. She explains that the SDGs were off track even before the pandemic and that now, owing to global crises, many poorer countries have slipped backwards.
“We actually started backtracking on many of these goals as countries were under enormous stress, and particularly the poorest countries,” she said, noting that the global output of many of the poorest nations has fallen by 30% — and some, such as the Small Island Developing States, by 40%. This has led to an enormous finance divide — raising SDG financing and investment gaps from $2 trillion a few years ago to around $4 trillion today.
So how can the UN strengthen multilateralism and, in turn, help narrow this gap? Watch here.
For more of our 2024 IMF/World Bank Spring Meetings coverage, visit Global Stage.
It’s not just Baltimore with a bridge problem
Six people are still missing after a cargo ship collided with the Francis Scott Key Bridge in Baltimore on Tuesday morning, collapsing the 1.6-mile-long bridge in a matter of seconds.
Its aftermath will make waves far beyond Baltimore. Here’s how.
Supersized shipping. The 964-foot container ship MV Dali was being steered by a pilot who specializes in the port when it lost power and hit one of the bridge’s main pylons.
The collapse draws attention to a pressing problem in the US supply chain: cargo ships – central to the modern economy – have gotten dramatically bigger even as US bridges age.
When the Francis Scott Key Bridge was built between 1972 and 1977, the average container ship carried 500-800 twenty-foot shipping containers – known as TEUs in the shipping business. But advances in engineering allowed ships to balloon to an average of 4,000 TEUs by 1985. Since then, carriers have continuously scaled up capacity, and the Dali, built in 2015, has a capacity of 10,000 TEUs. According to bridge experts, no bridge pylon could survive being hit by a vessel of this size.
The continuous growth has pitted ports against each other to attract bigger vessels. The expansion of the Panama Canal in 2016 upped the stakes, with ports along the East Coast racing to dredge their harbors to accommodate the larger ships now traveling through the canal.
The Port of Baltimore expanded to accommodate supersized ships in 2013. Since then, it has grown into the 9th-busiest port for receiving foreign cargo.
Aging bridges. Meanwhile, the Francis Scott Key Bridge has remained largely unchanged since the 1970s.
Although Maryland’s Governor Wes Moore said the bridge was “fully up to code,” the bridge scored a six out of nine safety rating on its last federal inspection in 2022. The bridge was downgraded because of concern about its reinforcement columns, though experts said Dali’s size and the way it hit the pylon made collapse inevitable.
The Francis Scott Key Bridge is not alone. Of the 617,000 bridges across the United States, 42% are at least 50 years old. About 300,000 received the same “fair” safety rating as the Key Bridge, while another 42,000 were in “poor” condition. Nationwide, 178 million trips are taken across structurally deficient bridges every day.
This is a global problem. Tuesday’s crash was the second time in just over a month that a container ship hit a major road bridge. On Feb. 22 in Guangzhou, a port city in southern China, a much smaller vessel carrying stacks of containers hit the base of a two-lane bridge, causing vehicles to fall. From 1960 to 2015, there were 35 major bridge collapses worldwide due to ship or barge collisions; 18 of those collapses happened in the United States.
The political fallout. The Biden administration has said that the federal government will pay the entire cost of rebuilding the collapsed bridge.
President Joe Biden’s generosity is likely to fend off Republican accusations that the collapse occurred because his administration has not spent enough on infrastructure – an issue that could become more important in the 2024 election in the aftermath of this tragedy and last summer’s collapse of an interstate overpass in Philadelphia.
It is unclear how much reconstruction will cost, but to actually get the country’s bridges up to snuff, the American Road & Transportation Builders Association estimates Biden would have to spend $171 billion. At the current rate of investment, it will take nearly 75 years to repair them all.
Of Biden’s $1.2 trillion Infrastructure Investment and Jobs Act, just $110 billion went to roads and bridges. This spending helped more than 43,000 bridges improve their ratings under the Biden administration. But because entropy is inescapable in a country as large as the US, nearly 70,000 bridges deteriorated over the same period.
Podcast: Can governments protect us from dangerous software bugs?
Listen: We've probably all felt the slight annoyance at prompts we receive to update our devices. But these updates deliver vital patches to our software, protecting us from bad actors. Governments around the world are increasingly interested in monitoring when dangerous bugs are discovered as a means to protect citizens. But would such regulation have the intended effect?
In season 2, episode 5 of Patching the System, we focus on the international system of bringing peace and security online. In this episode, we look at how software vulnerabilities are discovered and reported, what government regulators can and can't do, and the strength of a coordinated disclosure process, among other solutions.
Our participants are:
- Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro
- Serge Droz from the Forum of Incident Response and Security Teams (FIRST)
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Can governments protect us from dangerous software bugs?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
DUSTIN CHILDS: The industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
SERGE DROZ: I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable.
ALI WYNE: If you've ever gotten a notification pop up on your phone or computer saying that an update is urgently needed, you've probably felt that twinge of inconvenience at having to wait for a download or restart your device. But what you might not always think about is that these software updates can also deliver patches to your system, a process that is in fact where this podcast series gets its name.
Today, we'll talk about vulnerabilities that we all face in a world of increasing interconnectedness.
Welcome to Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
And about those vulnerabilities that I mentioned before, we're talking specifically about the vulnerabilities in the wide range of IT products that we use, which can be entry points for malicious actors. And governments around the world are increasingly interested in knowing about these software vulnerabilities when they're discovered.
Since 2021, for example, China has required that anytime such software vulnerabilities are discovered, they first be reported to a government ministry, even before the company that makes the technology is alerted to the issue. In the European Union, similar but less stringent legislation is pending that would require companies that discover a software vulnerability has been exploited to report that information to government agencies within 24 hours, along with information on any mitigations used to correct the issue.
These policy trends have raised concerns from technology companies and incident responders that such policies could actually undermine security.
Joining us today to delve into these trends and explain why are Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan, and Serge Droz from the Forum of Incident Response and Security Teams, aka FIRST, a community of IT security teams that respond when there's a major cyber crisis. Dustin, Serge, welcome to you both.
DUSTIN CHILDS: Hello. Thanks for having me.
SERGE DROZ: Hi. Thanks for having me.
ALI WYNE: It's great to be talking with both of you today. Dustin, let me kick off the conversation with you. And I tried in my introductory remarks to give listeners a quick glimpse as to what it is that we're talking about here, but give us some more detail. What exactly do we mean by vulnerabilities in this context and where did they originate?
DUSTIN CHILDS: Well, vulnerability, really when you break it down, it's a flaw in software that could allow a threat actor to potentially compromise a target, and that's a fancy way of saying it's a bug. They originate in humans because humans are imperfect and they make imperfect code, so there's no software in the world that is completely bug free, at least none that we've been able to generate so far. So every product, every program given enough time and resources can be compromised because they all have bugs, they all have vulnerabilities in them. Now, vulnerability doesn't necessarily mean that it can be exploited, but a vulnerability is something within a piece of software that potentially can be exploited by a threat actor, a bad guy.
ALI WYNE: And Serge, when we're talking about the stakes here, obviously vulnerabilities can create cracks in the foundation that lead to cybersecurity incidents or attacks. What does it take for a software vulnerability to become weaponized?
SERGE DROZ: Well, that really depends on the particular vulnerability. A couple of years ago, there was a vulnerability that was really super easy to exploit: Log4j. It was something that everybody could do in an afternoon, and that of course, is a really big risk. If something like that gets public before it's fixed, we really have a big problem. Other vulnerabilities are much harder to exploit, also because software vendors, in particular operating system vendors, have invested a great deal in making it hard to exploit vulnerabilities on their systems. The easy ones are getting rarer, mostly because operating system companies are building countermeasures that make it hard to exploit these. Others are a lot harder and need specialists, and that's why they fetch such a high price. So there is no general answer, but the trend is it's getting harder, which is a good thing.
ALI WYNE: And Dustin, let me come back to you then. So who might discover these vulnerabilities first and what kinds of phenomena make them more likely to become a major security risk? And give us a sense of the timeline between when a vulnerability is discovered and when a so-called bad actor can actually start exploiting it in a serious way.
DUSTIN CHILDS: The people who are discovering these are across the board. They're everyone from lone researchers just looking at things to nation states, really reverse engineering programs for their own purposes. So a lot of different people are looking at bugs, and it could be you just stumble across it too and it's like, "Oh, hey. Look, it's a bug. I should report this."
So there's a lot of different people who are finding bugs. Not all of them are monetizing their research. Some people just report it. Some people will find a bug and want to get paid in one way or another, and that's what I do, is I help them with that.
But then once it gets reported, depending on what industry you're in, it's usually 120 days to up to a year until it gets fixed by the vendor. But if a threat actor finds it and it can be weaponized, they can do that within 48 hours. So even if a patch is available and that patch is well-known, the bad guys can take that patch, reverse engineer it, and turn it into an exploit within 48 hours and start spreading. So within 30 days of a patch being made available, widespread exploitation is not uncommon if a bug can be exploited.
ALI WYNE: Wow. So 48 hours, that doesn't give folks much time to respond, but thank you, Dustin, for giving us that number. I think we now have at least some sense of the problem, the scale of the problem, and we'll talk about prevention and solutions in a bit. But first, Serge, I want to come back to you. I want to go into some more detail about the reporting process. What are the best practices in terms of reporting these vulnerabilities that we've been discussing today? I mean, suppose if I were to discover a software vulnerability for example, what should I do?
SERGE DROZ: This is a really good question, and there's still a lot of ongoing debate, even though the principles are actually quite clear. If you find a vulnerability, your first step should be to actually start informing confidentially the vendor, whoever is responsible for the software product.
But that actually sounds easier than it is because quite often it's maybe hard to talk to a vendor. There's still some companies out there that don't talk to ‘hackers,’ in inverted commas. That's really bad practice. In this case, I recommend that you contact a national agency that you trust that can mediate between you. That's all fairly easy to do if it's just between you and another party, but then you have a lot of vulnerabilities in products for which no one is really responsible – take open source, or products that are used inside all the other products.
So we're talking about supply chain issues, and then things really become messy. And in these cases, I really recommend that people start working together with someone who's experienced in doing coordinated vulnerability disclosure. Quite often what happens is that within the industry, affected organizations get together and form a working group that silently starts mitigating this. Best practice is that you give the vendor three months or more to actually be able to fix a bug, because sometimes it's not that easy. What you really should not be doing is leaking any kind of information. Even saying, "Hey, I have found a vulnerability in product X" may actually trigger someone to start looking at this. So it's really important that this remains a confidential process where very few people are involved.
ALI WYNE: So one popular method of uncovering these vulnerabilities that we've been discussing involves so-called bug bounty programs. What are bug bounty programs? Are they a good tool for catching and reporting these vulnerabilities? And moving beyond bug bounty programs, are there other tools that work when it comes to reporting vulnerabilities?
SERGE DROZ: Bug bounty programs are just one of the tools we have in our tool chest to actually find vulnerabilities. The idea behind a bounty program is that you have a lot of researchers who actually poke at code just because they may be interested, and as a company or a producer of software, you offer them a bounty, some money. If they report a vulnerability responsibly, you pay them, usually depending on how severe or how dangerous the vulnerability is, and encourage good behavior this way. I think it's a really great way because it actually creates a lot of diversity. Typically, bug bounty programs attract a lot of different types of researchers, so you have different ways of looking at your code, and that often discovers vulnerabilities that no one has ever thought of because no one really had that way of thinking. So I think it's a really good thing.
It also rewards people who responsibly disclose and don't just sell it to the highest bidder, because we do have companies out there that buy vulnerabilities that then end up in some strange gray market, exactly what we don't want. So I think that's a really good thing. Bug bounty programs are complementary to what we call penetration testing, where you hire a company that, for money, starts looking at your software. There's no guarantee that they find a bug, but they usually have a systematic way of going over this and you have an agreement. As I said, I don't think there's a single silver bullet, a single way to do this, but I think this is a great way to actually also reward this. And some of the bug bounty researchers make a lot of money. They actually make a living off that. If you're really good, you can make a decent amount of money.
DUSTIN CHILDS: Yeah, and let me just add on to that as someone who runs a bug bounty program. There are a couple of different types of bug bounty programs too, and the most common one is the vendor specific one. So Microsoft buys Microsoft bugs, Apple buys Apple bugs, Google buys Google bugs. Then there's the ones that are like us. We're vendor-agnostic. We buy Microsoft and Apple and Google and Dell and everything else pretty much in between.
And one of the biggest things that we do as a vendor-agnostic program is an individual researcher might not have a lot of sway when they contact a big vendor like a Microsoft or a Google, but if they come through a program like ours or other vendor-agnostic programs out there, they know that they have the weight of the Zero Day Initiative or that program behind it, so when the vendor receives that report, they know it's already been vetted by a program and it's already been looked at. So it's a little bit like giving them a big brother that they can take to the schoolyard and say, "Show me where the software hurt you," and then we can help step in for that.
ALI WYNE: And Dustin, you've told us what bug bounty programs are. Why would someone want to participate in that program?
DUSTIN CHILDS: Well, researchers have a lot of different motivations, whether it's curiosity or just trying to get stuff fixed, but it turns out money is a very big motivator pretty much across the spectrum. We all have bills to pay, and a bug bounty program is a way to get something fixed and earn potentially a large amount of money depending on the type of bug that you have. The bugs I deal with range anywhere between $150 on the very low end, up to $15 million for the most severe zero click iPhone exploits being purchased by government type of thing, so there's all points in between too. So it's potentially lucrative if you find the right types of bugs, and we do have people who are exclusively bug hunters throughout the year and they make a pretty good living at it.
ALI WYNE: Duly noted. So maybe I'm playing a little bit of a devil's advocate here, but if vulnerabilities, these cyber vulnerabilities, if they usually arise from errors in code or other technology mistakes from companies, aren't they principally a matter of industry responsibility? And wouldn't the best prevention just be to regulate software development more tightly and keep these mistakes from getting out into the world in the first place?
DUSTIN CHILDS: Oh, you used the R word. Regulation, that's a big word in this industry. So obviously it's less expensive to fix bugs in software before it ships than after it ships. So yes, obviously it's better to fix these bugs before they reach the public. However, that's not really realistic because like I said, every software has bugs and you could spend a lifetime testing and testing and testing and never root them all out and then never ship a product. So the industry right now is definitely looking to ship product. Can they do a better job? I certainly think they can. I spent a lot of money buying bugs and some of them I'm like, "Ooh, that's a silly bug that should never have left wherever shipped at." So absolutely, the industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
ALI WYNE: Obviously there isn't any silver bullet when it comes to managing these vulnerabilities, disclosing these vulnerabilities. So assuming that we probably can't eliminate all of them, how should organizations deal with fixing these issues when they're discovered? And is there some kind of coordinated vulnerability disclosure process that organizations should follow?
DUSTIN CHILDS: There is a coordinated disclosure process. I mean, I've been in this industry for 25 years and dealing with vulnerability disclosures since 2008 personally, so this is a well-known process for how you report it. As an industry, if you're developing software, one of the most important things you can do is make sure you have a contact. If someone finds a bug in your program, who do they email? With the more established programs like Microsoft and Apple and Google, it's very clear if you find a bug who you're supposed to email and what you're supposed to do with it. One of the problems we have as a bug bounty program is if we purchase a bug in a lesser-known piece of software, sometimes it's hard for us to hunt down who actually is responsible for maintaining it and updating it.
We've even had to go on to Twitter and LinkedIn to try and hunt down some people to respond to an email to say, "Hey, we've got a bug in your program," so that's one of the biggest things you can do is just be aware that somebody could report a bug to you. And as a consumer of the product, however, you need a patch management program. So you can't just rely on automatic updates. You can't just rely on things happening automatically or easily. You need to understand first what is in your environment, so you have to be ruthless in your asset discovery, and I do use the word ruthless there intentionally. You've got to know what is in your enterprise to be able to defend it, and then you've got to have a plan for managing it and patching it. That's a lot easier said than done, especially in a modern enterprise where not only do you have desktops and laptops, you've got IT devices, you've got IOT devices, you've got thermostats, you've got update, you've got little screens everywhere that need updating and they all have to be included in that patch management process.
ALI WYNE: Serge, when it comes to triaging vulnerabilities, it doesn't sound like there's a large need for government participation. So what are some of the reasons, legitimate and maybe less than legitimate, why governments might increasingly want to be notified about vulnerabilities even before patches are available? What are their motivations?
SERGE DROZ: So I think there are several different motivations. Governments are getting increasingly fed up with the kinds of excuses that our industry, the software industry, makes about how hard it is to avoid software vulnerabilities, all the reasons and excuses we bring for not doing our jobs. And frankly, as Dustin said, we could be doing better. Governments just want to know so they can actually send the message that, "Hey, we're watching you and we want to make sure you do your job." Personally, I'm not really convinced this is going to work. So that would be mostly the legitimate reason why governments want to know about vulnerabilities. I think it's fair that the government knows or learns about the vulnerability after the fact, just to get an idea of what the risk is for the entire industry. Personally, I feel that only the parties that need to know should know about it during the responsible disclosure.
And then of course, there's governments that like vulnerabilities because they can abuse them themselves. I mean, governments are known to exploit vulnerabilities through their favorite three-letter agencies. That's actually quite legitimate for governments to do. It's not illegal for governments to do this type of work, but of course, as a consumer or as an end user, I don't like this. I don't want products that have vulnerabilities that are exploited. And personally, from a civil society point of view, there's just too much risk with this being out there. So my advice really is: the fewer people, the fewer organizations that know about a vulnerability, the better.
DUSTIN CHILDS: What we've been talking about a lot so far is what we call coordinated disclosure, where the researcher and the vendor coordinate a response. When you start talking about governments though, you start talking about non-disclosure, and that's when people hold onto these bugs and don't report them to the vendor at all, and the reason they do that is so that they can use them exclusively. So that is one reason why governments hold onto these bugs and want to be notified is so that they have a chance to use them against their adversaries or against their own population before anyone else can use them or even before it gets fixed.
ALI WYNE: So the Cybersecurity Tech Accord recently released a statement opposing the kinds of reporting requirements we've been discussing. From an industry perspective, what are the concerns when it comes to reporting vulnerabilities to governments?
DUSTIN CHILDS: Really the biggest concern is making sure that we all have an equitable chance to get it fixed before it gets used. If a single government starts using vulnerabilities to exploit for their own gain, for whatever purpose, that puts the rest of the world at a disadvantage, and that's the rest of the world, their allies as well as their opponents. So we want to do coordinated disclosure. We want to get the bugs fixed in a timely manner, and keeping them to themselves really discourages that. It discourages finding bugs, it discourages reporting bugs. It really discourages vendors from fixing bugs too, because if the vendors know that the governments are just going to be using these bugs, they might get a phone call from their friendly neighborhood three-letter agency saying, "You know what? Hold off on fixing that for a while." Again, it just puts us all at risk, and we saw this with Stuxnet.
Stuxnet was a tool that was developed by governments targeting another government. It was targeting Iranian nuclear facilities, and it did do damage to Iranian nuclear facilities, but it also did a lot of collateral damage throughout Europe as well, and that's what we're trying to avoid. It's like if it's a government on government thing, great, that's what governments do, but we're trying to minimize the collateral damage from everyone else who was hurt by this, and there really were a lot of other places that were impacted negatively from the Stuxnet virus.
ALI WYNE: And Serge, what would you say to someone who might respond to the concerns that Dustin has raised by saying, "Well, my government is advanced and capable enough to handle information about vulnerabilities responsibly and securely, so there's no issue or added risk in reporting to them." What would you say to that individual?
SERGE DROZ: The point is that there are certain things that really you only deal with on a need-to-know basis. That's something that governments actually do know. When governments deal with confidential or critical information, it's always on a need-to-know basis. They don't tell this to every government employee, even though they're, of course, loyal. It makes the risk of this leaking bigger, even if the government doesn't have any ill intent. So there's just no need, the same way there is no need for all the other hundred thousand security researchers to know about this. So I think as long as you cannot contribute constructively to mitigating this vulnerability, you should not be part of that process.
Having said that, though, there are some governments that have actually tried really hard to help researchers make contact with vendors. Some researchers are afraid to report vulnerabilities because they feel they're going to come under pressure or stuff like this. So if a government wants to take that role and can create enough trust that researchers trust them, I don't really have a problem, but it should not be mandatory. Trust needs to be earned. You cannot legislate this, and every time you have to legislate something, I mean, come on, you legislate it because people don't trust you.
ALI WYNE: We've spent some time talking about vulnerabilities and why they're a problem. We've discussed some effective and maybe some not-so-effective ways to prevent or manage them better. And I think governments have a legitimate interest in knowing that companies are acting responsibly, and that interest is the impetus behind at least some of the push for more regulation and reporting. But what other ways does each of you see for governments to help ensure that companies are mitigating risks and protecting consumers as much as possible?
DUSTIN CHILDS: So one of the things that we're involved with here at the Zero Day Initiative is encouraging governments to allow safe harbor. And really what that means is that researchers are safe in reporting vulnerabilities to a vendor without the legal threat of being sued or having other action taken against them. As long as they are legitimately reporting a bug and not trying to steal or violate laws, as long as they're legitimate researchers trying to get something fixed, they're able to do that without facing legal consequences.
One of the biggest things that we do as a bug bounty program is just handle the communications between researchers and the vendors, and that is really where it can get very contentious. So to me, one of the things that governments can do to help is make sure that safe harbor is allowed so that the researchers know that, "I can report this vulnerability to this vendor without getting in touch with a lawyer first. I'm just here trying to get something fixed. Maybe I'm trying to get paid as well," so maybe there is some monetary value in it, but really they're just trying to get something fixed, and they're not trying to extort anyone. They're not trying to create havoc, they're just trying to get a bug fixed, and that safe harbor would be very valuable for them. That's one thing we're working on with our government contacts, and I think it's a very big thing for the industry to assume as well.
SERGE DROZ: Yes, I concur with Dustin. I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable. That also includes a regulatory framework that actually gets away from this blaming. I mean, writing software is hard, bugs appear. If you just constantly keep bashing people that they're not doing it right, or you threaten them with liabilities, they're not going to talk to you about these types of things. So I think the job of the government is to encourage responsible behavior and to create an environment for that. And maybe there's always going to be a couple of black sheep, and here maybe the role of the government is really to encourage them to play along and start offering vulnerability reporting programs. That's where I see the role of the government: creating good governance to actually enable responsible vulnerability disclosure.
ALI WYNE: Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan. And Serge Droz from the Forum of Incident Response and Security Teams, a community of IT security teams that respond when there is a major cyber crisis. Dustin, Serge, thanks very much for joining me today.
DUSTIN CHILDS: You're very welcome. Thank you for having me.
SERGE DROZ: Yes, same here. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. We have five episodes this season covering everything from cyber mercenaries to a cybercrime treaty. So follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear more. I'm Ali Wyne. Thanks very much for listening.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
What We're Watching: Pentagon leaker suspect arrested, Gershkovich swap chatter, Uruguay’s free trade ambitions
And the suspected leaker is ...
On Thursday afternoon, the FBI arrested a suspect in the most damaging US intel leak in a decade, identifying him as Jack Teixeira, a 21-year-old member of the Massachusetts Air National Guard. Teixeira was reportedly the leader of an online gaming chat group, where he had been allegedly sharing classified files for three years. If convicted of violating the US Espionage Act, he could spend the rest of his life behind bars. Teixeira will appear in a Boston court on Friday.
We know that the chat group was made up mostly of male twentysomethings who loved guns, racist online memes, and, of course, video games. We don’t know what motivated the leaks, what other classified material the leaker had, or whether any of the docs were divulged to a foreign intelligence agency.
Arresting a suspect, though, is just the beginning of damage control for the Pentagon and the Biden administration. Although the content of the leaks surprised few within the broader intel community, many outside it might not have realized the extent to which the US spies on its allies.
Uncle Sam obviously would’ve preferred to have intercepted the message this scandal sends to America’s enemies: US intel is not 100% secure.
Russia is maybe considering a swap for Evan Gershkovich
A top Russian diplomat suggested Thursday that Moscow could explore a prisoner swap with the US in order to release American journalist Evan Gershkovich, whom Russian authorities jailed earlier this month on espionage charges.
But first, said Deputy Foreign Minister Sergei Ryabkov, the trial against Gershkovich will have to play out in full. That could take as long as a year.
What might Russia want in exchange? Hard to say. Last year, the Kremlin swapped WNBA star Brittney Griner, convicted of a drug offense while traveling in Russia, for notorious arms dealer Viktor Bout. At the time, the Kremlin also reportedly sought the release of a Russian assassin from a German prison, but that swap broke down when the Kremlin refused to also release Paul Whelan, an American currently serving an espionage sentence in Russia.
A year from now, the world, and the Ukraine war, might look very different. But expect the Kremlin to throw the book at Gershkovich to maximize its leverage ahead of any talks about his release.
Meanwhile, elsewhere in Russia’s prison system, opposition leader Alexei Navalny — currently in solitary confinement — has suffered a fresh health crisis that his spokeswoman says is another attempt to poison him.
For context, see our recent interview with Daniel Roher, director of the Oscar-winning documentary Navalny.
Uruguay’s FTA dream
Uruguay's Foreign Minister Francisco Bustillo will soon meet with Chinese officials to take steps toward establishing a Free Trade Agreement between the two countries. Uruguay has wanted an FTA for three decades, and the timing might finally be right as China seeks to increase its influence in South America.
Getting an FTA with China has been a priority for Uruguay’s President Luis Lacalle Pou's administration. The meeting will come on the heels of trade talks between Brazil and China, countries that saw their two-way trade hit a record $171.5 billion in 2022. Uruguay wants in on the action.
China has deepened its trade relationships in Latin America throughout the 21st century, beating out the US as the region's largest trading partner. Beijing benefits politically from these partnerships, gaining votes at the UN and support for Chinese appointees to multinational institutions, as well as the ability to implement technology standards into regional infrastructure.
But not all of Uruguay's neighbors are comfortable with China's swelling influence in the region, or with Uruguay flying solo. Uruguay is facing resistance from other Mercosur countries that favor negotiating regional trade deals as a bloc. Paraguay, which still recognizes Taipei in lieu of the government in Beijing, is leading the pushback – a conflict that could test one of the bloc’s few rules: a restriction on making preferential agreements with third countries.
Untangling the global water crisis
Access to clean and drinkable water is a significant challenge all over the world. UN-Water Chair Gilbert Houngbo joins Ian Bremmer on GZERO World to shed light on the complexity of the issue, which he says is “a combination of bad governance and lack of resources.”
He stresses that water needs to become "everyone's business," and investment in water-related infrastructure is key. Houngbo points out that agriculture is responsible for “75% of water use,” so making it “climate-friendly” is a necessary step.
The situation in Yemen, where there is virtually no water access, highlights the challenges faced in addressing the problem. Houngbo notes that a multi-pronged approach that involves investment in infrastructure and technology is key – especially in areas like desalination. He acknowledges that desalination is expensive, and official development cooperation can play a role in addressing the issue.
Watch the GZERO World episode: The uncomfortable truth about water scarcity
Biden shifting to center ahead of 2024 reelection bid
Jon Lieber, head of Eurasia Group's coverage of political and policy developments in Washington, DC, shares his perspective on US politics:
How are President Biden's reelection plans affecting his policies?
The 2024 presidential election is already heating up, with the Republican field growing more crowded by the week and President Joe Biden angling for a reelection campaign despite speculation about his advanced age. So far, Biden has drawn only one potential primary challenger, 2020 candidate Marianne Williamson, whom he can likely ignore. And as of today, it looks very likely that he'll be the Democratic nominee, with an announcement of his campaign coming sometime this spring, perhaps as soon as April. After two years promoting progressive policies like student loan forgiveness and a massive climate and healthcare bill, Biden is now tacking to the center, with pivots in three critical areas: crime, immigration, and spending.
On crime, the President recently announced his support for a Republican effort to block a local District of Columbia bill, which would mark the first time in over 30 years that Congress has overridden a local bill in the capital city. This has angered many of Biden's allies on the left who support independent statehood for DC, but a huge vote in the Senate will demonstrate the fear that Democrats have of being seen as soft on crime.
On immigration, though Biden started off his presidency with a slew of progressive immigration actions that drew praise from Democrats, after two years of rising encounters at the southern border and criticism and legal challenges from the Republican Party, the Biden administration is trying to take a more centrist approach, combining new opportunities for immigration with increased border enforcement, including, most controversially, reimplementing the practice of detaining asylum-seeking families, which has led to some outcry from Biden's allies on the left.
Finally, on the budget, Biden is pivoting from arguing that the US needs to be investing in infrastructure and social spending to a plan to control deficits through a combination of tax increases and spending cuts. This effort is mostly designed to make Republican proposals for balancing the budget look unreasonable, but it will also allow Biden to stake out centrist territory as a fiscal hawk after spending heavily in his first two years.
Policy-wise, 2023 is going to be largely about setting the stage for 2024. Congress remains gridlocked on most issues, and Biden's strong signals that he will be a candidate for President next year, despite his advanced age, will continue to drive his attempts to appeal to the middle, confident that the progressive left will not abandon him because of its acute fear of the one thing it wants least of all: Republican rule.
Hard Numbers: Turkey/Syria quake death toll, Modi ally’s biz empire crumbles, West Bank violence, AMLO believes in elves
50,000: The death toll of the Feb. 6 Turkey/Syria earthquakes topped 50,000 on Sunday. Turkish President Recep Tayyip Erdoğan is feeling the heat over allegedly corrupt practices that led to so many collapsed buildings on his watch ahead of the May 14 election.
145 billion: The industrial empire of Indian billionaire Gautam Adani lost $145 billion — 60% of its value — in the month after US-based short seller Hindenburg Research accused it of fraud, allegations that Adani vehemently denies. The Adani Group has faced years of corruption allegations, but it remains to be seen whether the longtime ally of PM Narendra Modi is too big to fail.
2/1: Violence erupted in the West Bank on Sunday after a Palestinian gunman killed two Israeli settlers. That sparked a retaliatory rampage by settlers through the village of Hawara that killed at least one Palestinian, bringing the West Bank to a boiling point.
7 million: Did someone leave the wardrobe open?! Mexico’s President Andrés Manuel López Obrador, known as AMLO, tweeted a photo of what he claims is an “aluxe”, a mischievous woodland spirit from Mayan folklore requiring gifts to appease it. The tweet had 7 million views as of Monday morning and is not out of character for AMLO, who has long revered indigenous beliefs and culture.
Hard Numbers … after a year of war in Ukraine
300,000: Human losses in the conflict are mounting (and disputed), but high-end estimates put combined military and civilian deaths on both sides at a whopping 300,000.
2.1 & 0.3: Russia’s economy contracted by just 2.1% last year, far less than predicted, due to continued sales of its discounted crude oil and adaptability. The IMF predicts a 0.3% growth rate for Russia this year thanks to high export prices.
51,000 vs. 40,600: Having seized roughly 51,000 square miles of Ukrainian land by late March last year, Russia has since lost roughly one-fifth of that. The Kremlin now controls about 40,600 square miles (17% of Ukraine), entirely in the south and east.
18 & 60: Russia’s invasion of Ukraine has decimated the country economically, with roughly 60% of Ukrainians now living below the poverty line, compared to 18% before the war.
35 & 139 billion: Ukraine’s GDP has diminished by 35%, and Russian targeted attacks are slamming the country’s infrastructure, having caused US$139 billion worth of damage (so far). Well over a third of the country’s population now depends on humanitarian aid to live.
Up to 1 million: A reported 8,087,952 Ukrainian refugees are now spread across Europe, with close to 5 million seeking temporary asylum. Millions more are displaced within Ukraine. An estimated 500,000 to 1 million Russians have fled their homeland, driven by economic unrest, politics, and military mobilization.