The AI power paradox: Rules for AI's power


Ian Bremmer's Quick Take: Hi everybody, Ian Bremmer here, and I have a piece to share with you that I've just completed with Mustafa Suleyman, my good friend, the co-founder of DeepMind and now Inflection AI, in Foreign Affairs.

It's called "The AI Power Paradox: Can states learn to govern artificial intelligence before it’s too late?" It's about the biggest change in power and governance in such a short period of time that I've experienced in my lifetime.

It's how to deal with artificial intelligence. Just a year ago, there wasn't a head of state I would meet with who was asking anything about AI, and now it's in every single meeting. And in part, that's because of how explosive this technology has suddenly become, in terms of both its staggering upside and its dangers. I mean, when anyone with a smartphone has access to, you know, global levels of intelligence in an education field, in a health care field, in a managerial field, not just access to information and communication, but access to intelligence and the ability to take action with it, that is a game changer for globalization and for productivity of a sort that we've never seen before in such a short period of time, all across the world. And yet those same technologies can be used in very disruptive ways by bad actors all over the world, and not just governments, but organizations and individual people. And that's a big piece of why these leaders are concerned.

They're concerned about whether we can still run an election that's free and fair and that people will believe in. Will we be able to limit the ability of bad actors to develop and distribute malware or bioweapons? Will intellectual workers, white-collar workers, still have jobs, still have productive things to do? But it's also because the top issues that policymakers care about are affected very dramatically by AI, whether it's how you think about the war with Russia and Russia's ability to be a disruptive actor, or US-China relations and to what extent that continues to be a reasonably stable and constructive interdependent relationship.

And can the United States and other advanced industrial democracies persist as functional democracies given the proliferation of AI? So everyone's worried about it. Everyone has urgency. Very few people know what to do. So, a few big takeaways from us in this piece.

The big concept that we think should infuse AI governance is techno-prudentialism. That's a big, long word, but it's aligned with macro-prudentialism, the way that global finance has been governed: the idea that you need to identify and limit risks to global stability from AI without choking off innovation and the opportunities that come from it. That's the way the Financial Stability Board works, and the Bank for International Settlements, and the IMF. Despite all of the conflict between the United States and China and the Europeans, they all work together in those institutions. They do it because global finance is too important to allow it to break. It fundamentally needs to be, and is, global. It has the potential to cause systemic contagion and collapse, and everyone wants to work against and mitigate that.

So techno-prudentialism would be applying that to the AI space. With that as a backdrop, we see five principles that should direct AI governance; when you're thinking about governing AI, you want to keep these principles in mind. Number one, the precautionary principle: do no harm. It's obvious in the medical field, and it needs to be obvious in the AI field. These technologies are incredibly suffused with opportunities for global growth, but they're also enormously dangerous. Caution has to be in place, because tinkering with these systems and creating capabilities for regulation can be incredibly dangerous and can also cut off incredible innovation. So that level of humility, as we think about governing a completely new set of technologies that will change very, very quickly, should be number one.

Number two, agile. Because these technologies are changing so quickly, the institutions and the architecture that you create need to be very flexible themselves. They need to be able to adapt and course correct as AI itself evolves and improves. Usually we put architecture together that's meant to be as strong and stable as humanly possible, so that nothing can break it, and that also means it usually can't change very much, whether you're talking about the Security Council of the United Nations, or NATO, or the European Union. That's not the way you need to think about AI governance.

Number three, inclusive. It needs to be a hybrid system. Technology companies are the dominant actors in artificial intelligence. They exert a form of sovereignty, what I call a techno-polar order. And we believe that any institutions that govern AI will have to have both technology companies and governments at the table. That doesn't mean tech companies get equal votes, but they're going to have to be directly and mutually involved in governance, because governments don't have the expertise, they don't understand what these algorithms do, and they're not driving the future.

Number four, impermeable. These institutions have to be global. You can't have slippage when you're talking about technologies that are incredibly dangerous if individual actors get their hands on them and can use them for whatever purposes they want. They can't be fragmented institutions. They can't be institutions that allow some percentage of AI companies and developers to opt out. It will have to be easy in and very hard out for the architecture that's created.

And then finally, number five, targeted. This is not one-size-fits-all. AI ends up impacting every sector of the global economy, and very, very different types of institutions will need to be created for different needs. So those are the principles of AI governance. What kind of institutions do we need? The first: just as we have, through the United Nations, the Intergovernmental Panel on Climate Change, we need one for artificial intelligence. With the kinds of models, the data, the training being done, the algorithms being developed and deployed, you need to have all of the actors in one space sharing the same set of facts, which we don't have right now. Everyone's touching different pieces of the elephant. So, an intergovernmental panel on artificial intelligence.

A second would be a geotechnology stability board. This is a group of both national and technology actors that together can react when dangerous disruptions occur, weaponization by cyber criminals, or state-sponsored actors, or lone wolves, as will inevitably happen. Those responses will need to be global, because everyone has a huge stake in not allowing these technologies to suddenly undermine governance on the planet. And finally, we're going to need some form of US-China collaboration. That looks like the hardest piece to put together right now, because of course, we don't even talk on defense matters at a high level at all.

And the politics are pointing in a very different direction. But with the Americans and the Soviets, both sides knew they had access to these weapons of mass destruction, and even though they hated each other, they knew they had to talk about it, about what their capabilities were and what capabilities they thought were too dangerous to develop, so that they didn't blow each other up. That kind of communication needs to happen between the US and China and their top technology actors, especially because not only will some of these technologies be existentially threatening, but also because lots of them will very quickly be in the hands of actors that developed countries, and countries with a lot at stake in maintaining the existing system, will not want to see as a threat. And, you know, it's not that we believe you can set that up today, but rather that you want the principals of governments and corporations to be talking about it now, so that when the first crises start emerging, they will already be prepared in this direction. They will have a toolkit that they can then take out and start working with.

So that's the view of the piece, which I suspect we'll be talking about an awful lot over the course of the coming weeks and months. I hope you find it interesting and worthwhile, and we've got a link to the piece that we'll be sending on. Have a look at it. Talk to you soon. Bye.
