Making rules for AI … before it’s too late

Ian Bremmer, the founder and president of Eurasia Group and GZERO Media, has joined forces with tech entrepreneur Mustafa Suleyman, the CEO and co-founder of Inflection AI, to take on one of the big questions of this moment in human history: “Can states learn to govern artificial intelligence before it’s too late?”

Their answer is yes. In the next edition of “Foreign Affairs,” already available online here, they’ve offered a series of ideas and principles they say can help world leaders meet this challenge.

Here’s a summary of their argument.

Artificial intelligence will open our lives and societies to impossible-to-imagine scientific advances, unprecedented access to technology for billions of people, toxic misinformation that disrupts democracies, and real economic upheaval.

In the process, it will trigger a seismic shift in the structure and balance of global power.

That’s why AI’s creators have become crucial geopolitical actors. Their sovereignty over AI further entrenches an emerging “techno-polar” international order, one in which governments must compete with tech companies for control over these fast-emerging technologies and their continuous development.

Governments around the world are (slowly) awakening to this challenge, but their attempts to use existing laws and rules to govern AI won’t work, because the complexity of these technologies and the speed of their advance will make it nearly impossible for policymakers and regulators to keep pace.

Policymakers now have a short time in which to build a new governance model to manage this historic transition. If they don’t move quickly, they may never catch up.

If global AI governance is to succeed, the international system must move past traditional ideas of sovereignty by welcoming tech companies to the planning table. More importantly, AI’s unique features must be reflected in its governance.

There are five key principles to follow. AI governance must be:

  1. Precautionary: First, rules should do no harm.
  2. Agile: Rules must be flexible enough to evolve as quickly as the technologies do.
  3. Inclusive: The rule-making process must include every actor needed to regulate AI intelligently, not just governments.
  4. Impermeable: Because a single bad actor or breakaway algorithm can create a universal threat, rules must be comprehensive.
  5. Targeted: Rules must be carefully targeted, rather than one-size-fits-all, because AI is a general-purpose technology that poses multidimensional threats.

Building on these principles, a strong “techno-prudential” model would mitigate the societal risks posed by AI, something akin to the macroprudential role played by global financial institutions like the International Monetary Fund in overseeing risks to the financial system. It would also ease tensions between China and the United States by reducing the extent to which AI remains both an arena and a tool of geopolitical competition.

The techno-prudential mandate proposed here would require at least three overlapping governance regimes, each addressing a different aspect of AI.

The first, like the UN’s Intergovernmental Panel on Climate Change, would be a global scientific body that can objectively advise governments and international bodies on basic definitional questions about AI.

The second would resemble the monitoring and verification approaches of arms control regimes to prevent an all-out arms race between countries like the US and China.

The third, a Geotechnology Stability Board, would manage the disruptive forces of AI, much as financial authorities manage monetary policy today.

Creating several institutions with different competencies to address different aspects of AI will give us the best chance of limiting AI risk without blocking the innovation that can change all our lives for the better.

So, GZERO reader, what do you think? Can and should this path be followed? Share your thoughts with us here.
