Can we govern AI before it’s too late?

That’s the question I set out to answer in my latest Foreign Affairs deep dive, penned with one of the world’s top minds on artificial intelligence, Inflection AI CEO and Co-Founder Mustafa Suleyman.

Just a year ago, there wasn’t a single world leader I’d meet who would bring up AI. Today, there isn’t a single world leader who doesn’t. In this short time, the explosive debut of generative AI systems like ChatGPT and Midjourney signaled the beginning of a new technological revolution that will remake politics, economies, and societies. For better and for worse.

As governments are starting to recognize, realizing AI’s astonishing upside while containing its disruptive – and destructive – potential may be the greatest governance challenge humanity has ever faced. If governments don’t get it right soon, it’s possible they never will.

Why AI needs to be governed

First, a disclaimer: I’m an AI enthusiast. I believe AI will drive nothing less than a new globalization that will give billions of people access to world-leading intelligence, facilitate impossible-to-imagine scientific advances, and unleash extraordinary innovation, opportunity, and growth. Importantly, we’re heading in this direction without policy intervention: The fundamental technologies are proven, the money is available, and the incentives are aligned for full-steam-ahead progress.

At the same time, artificial intelligence has the potential to cause unprecedented social, economic, political, and geopolitical disruption that upends our lives in lasting and irreversible ways.

In the nearest term, AI will be used to generate and spread toxic misinformation, eroding social trust and democracy; to surveil, manipulate, and subdue citizens, undermining individual and collective freedom; and to create powerful digital or physical weapons that threaten human lives. In the longer run, AI could also destroy millions of jobs, worsening existing inequalities and creating new ones; entrench discriminatory patterns and distort decision-making by amplifying bad information feedback loops; or spark unintended and uncontrollable military escalations that lead to war. Farther out on the horizon lurks the promise of artificial general intelligence (AGI), the still-uncertain point where AI exceeds human performance at any given task, and the existential (albeit speculative) peril that an AGI could become self-directed, self-replicating, and self-improving beyond human control.

Experts disagree on which of these risks are most important or urgent. Some lie awake at night fearing the prospect of a superpowerful AGI turning humans into slaves. To me, the real catastrophic threat is humans using ever more powerful and available AI tools for malicious or unintended purposes. But it doesn’t really matter: Given how little we know about what AI might be able to do in the future – what kinds of threats it could pose, how severe and irreversible its damage could be – we should prepare for the worst while hoping for (and working toward) the best.

What makes AI so hard to govern

AI can’t be governed like any previous technology because it’s unlike any previous technology. It doesn’t just pose policy challenges; its unique features also make solving those challenges progressively harder. That is the AI power paradox.

For starters, the pace of AI progress is hyper-evolutionary. Take Moore’s Law, which for decades successfully predicted the doubling of computing power roughly every two years. The new wave of AI makes that rate of progress seem quaint. The amount of computation used to train the most powerful AI models has increased by a factor of 10 every year for the last 10 years. Processing that once took weeks now happens in seconds. Yesterday’s cutting-edge capabilities are running on smaller, cheaper, and more accessible systems today.
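
To see just how quaint, here’s a back-of-the-envelope comparison, sketched in Python. The constants are the figures cited above – a doubling every two years for Moore’s Law, a tenfold annual increase for AI training compute – and the result is illustrative, not a precise forecast.

    # Back-of-the-envelope: Moore's Law vs. AI training compute over a decade.
    # Assumes computing power doubles every 2 years (Moore's Law) and AI
    # training compute grows 10x per year, per the figures cited above.
    years = 10
    moores_law_growth = 2 ** (years / 2)  # five doublings -> ~32x
    ai_compute_growth = 10 ** years       # ten 10x jumps -> 10,000,000,000x
    print(f"Moore's Law over {years} years: {moores_law_growth:,.0f}x")
    print(f"AI training compute over {years} years: {ai_compute_growth:,.0f}x")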

As their enormous benefits become self-evident, AI systems will only grow bigger, cheaper, and more ubiquitous. And with each new order of magnitude, unexpected capabilities will emerge. Few predicted that training on raw text would enable large language models to produce coherent, novel, and even creative sentences. Fewer still expected language models to be able to compose music or solve scientific problems, as some now can. Soon, AI developers will likely succeed in creating systems capable of quasi-autonomy (i.e., able to achieve concrete goals with minimal human oversight) and self-improvement – a critical juncture that should give everyone pause.

Then there’s the ease of AI proliferation. As with any software, AI algorithms are much easier and cheaper to copy and share (or steal) than physical assets. Although the most powerful models still require sophisticated hardware to work, midrange versions can run on computers that can be rented for a few dollars an hour. Soon, such models will run on smartphones. No technology this powerful has ever become so widely accessible, so quickly. All this plays out on a global field: Once released, AI models can and will be everywhere. All it takes is one malign or “breakout” model to wreak worldwide havoc.

AI also differs from older technologies in that almost all of it can be characterized as “general purpose” and “dual use” (i.e., having both military and civilian applications). An AI application built to diagnose diseases might be able to create – and weaponize – a new one. The boundaries between the safely civilian and the militarily destructive are inherently blurred. This makes AI more than just software development as usual; it is an entirely new means of projecting power.

As such, its advancement is being propelled by irresistible incentives. Whether for its repressive capabilities, economic potential, or military advantage, AI supremacy is a strategic objective of every government and company with the resources to compete. At the end of the Cold War, powerful countries might have cooperated to arrest a potentially destabilizing technological arms race. But today’s tense geopolitical environment makes such cooperation much harder. From the vantage point of the world’s two superpowers, the United States and China, the risk that the other side will gain an edge in AI is greater than any theoretical risk the technology might pose to society or to their own domestic political authority. This zero-sum dynamic means that Beijing and Washington are focused on accelerating AI development, rather than slowing it down.

But even if the world’s powers were inclined to contain AI, there’s no guarantee they’d be able to, because, like most of the digital world, every aspect of AI is presently controlled by the private sector. I call this arrangement “technopolar,” with technology companies effectively exerting sovereignty over the rules that apply to their digital fiefdoms at the expense of governments. The handful of large tech firms that currently control AI may retain their advantage for the foreseeable future – or they may be eclipsed by a raft of smaller players as low barriers to entry, open-source development, and near-zero marginal costs lead to uncontrolled proliferation of AI. Either way, AI’s trajectory will be largely determined not by governments but by private businesses and individual technologists who have little incentive to self-regulate.

Any one of these features would strain traditional governance models; all of them together render these models inadequate and make the challenge of governing AI unlike anything governments have faced before.

The “technoprudential” imperative

For AI governance to work, it must be tailored to the specific nature of the technology and the unique challenges it poses. But because the evolution, uses, and risks of AI are inherently unpredictable, AI governance can’t be fully specified at the outset. Instead, it must be as innovative, adaptive, and evolutionary as the technology it seeks to govern.

Our proposal? “Technoprudentialism.” That’s a big word, but essentially it’s about governing AI in much the same way that we govern global finance. The idea is that we need a system to identify and mitigate risks to global stability posed by AI before they occur, without choking off innovation and the opportunities that flow from it, and without getting bogged down by everyday politics and geopolitics. In practice, technoprudentialism requires the creation of multiple complementary governance regimes – each with different mandates, levers, and participants – to address the various aspects of AI that could threaten geopolitical stability, guided by common principles that reflect AI’s unique features.

Mustafa and I argue that AI governance needs to be precautionary, agile, inclusive, impermeable, and targeted. Built atop these principles should be a minimum of three AI governance regimes: an Intergovernmental Panel on Artificial Intelligence for establishing facts and advising governments on the risks posed by AI, an arms-control-style mechanism for preventing an all-out arms race between them, and a Geotechnology Stability Board for managing the disruptive forces of a technology unlike anything the world has seen.

The 21st century will throw up few challenges as daunting or opportunities as promising as those presented by AI. Whether our future is defined by the former or the latter depends on what policymakers do next.
