The geopolitics of AI
First is disinformation. We know that AI bots can be very confident, and they're also frequently very wrong. And if you can no longer discern an AI bot from a human being in text, and very soon in audio and in video, then you can no longer discern truth from falsehood. That is not good news for democracies. It's actually good news for authoritarian countries that deploy artificial intelligence for their own political stability and benefit. But in a country like the United States or Canada, or in Europe or Japan, it's much more deeply corrosive. Unless we are able to put very clear labeling and restrictions on what is AI and what is not, we're going to be in very serious trouble: the erosion of our institutions will come much faster than anything we've seen through social media, cable news, or any of the other challenges we've faced in the information space.
Second, and relatedly, is proliferation: the proliferation of AI technologies by bad actors, or by tinkerers who lack the knowledge and are indifferent to the chaos they may sow. Today we're in an environment where roughly a hundred human beings have both the knowledge and the technology to deploy a smallpox virus. Don't do that, right? But very soon, with AI, those numbers are going way up. And not just for creating dangerous new viruses or lethal autonomous drones, but also for writing and deploying malware to take money from people, to destroy institutions, or to undermine an election. Putting all of these capabilities in the hands not just of a small number of governments but of individuals with a laptop and a little programming skill is going to make it a lot harder to respond effectively. We saw some of this with the offensive cyber scare, which then created large security industries to respond, and lots of costs. That's what we're going to see with AI, but in every field.
Then you have the displacement risk, which a lot of people have talked about: a whole bunch of people who no longer have productive jobs because AI replaces them. I'm not particularly worried about this at the macro level, in the sense that I believe the new jobs that will be created, many of which we can't even imagine right now, plus the existing jobs that become much more productive because they use AI effectively, will outweigh the jobs lost to artificial intelligence. But those changes will happen at the same time. And unless you have policies in place that retrain and economically take care of the people displaced in the near term, those people get angrier. They become much more supportive of anti-establishment politicians and come to feel that their existing political leaders are illegitimate. We've seen this through free trade and the hollowing out of middle classes. We've seen it through automation and robotics. It's going to be a lot faster and a lot broader with AI.
And then finally, the one I worry about the most, and that doesn't get enough attention, is the replacement risk: the fact that so many human beings will replace the relationships they have with other human beings, replacing them with AI. They may be doing this knowingly, or they may be doing it without knowing. But certainly I see how much, even with early-stage AI bots, people are developing actual relationships with these things, particularly young people. And we as humans need communities, families, and parents that care about us and take care of us in order to become social, adaptable animals.
And when that's happening through artificial intelligence that not only doesn't care about us, but doesn't even have human beings as its principal interest, because the principal interest is the business model, and the human beings are very much subsidiary and not necessarily aligned, that creates a lot of dysfunction. I fear that a level of dehumanization could come very, very quickly, especially for young people, through addictions and antisocial relationships with AI, which we'll then try to fix through AI bots that can do therapy. That is a direction we really don't want to head in on this planet. We will be doing real-time experimentation on human beings. We never do that with a new GMO food. We never do that with a new vaccine, even when we're facing a pandemic. We shouldn't be doing it with our brains, with our persons, with our souls. And I hope that gets addressed real fast.
So anyway, that's a little bit from me on the geopolitics of AI, something I'm writing about and thinking about a lot these days. I hope everyone's well, and I'll talk to you all real soon.