
Making rules for AI … before it’s too late

Ian Bremmer, the founder and president of Eurasia Group and GZERO Media, has joined forces with tech entrepreneur Mustafa Suleyman, the CEO and co-founder of Inflection AI, to take on one of the big questions of this moment in human history: “Can states learn to govern artificial intelligence before it’s too late?”

Their answer is yes. In the forthcoming issue of “Foreign Affairs,” already available online here, they offer a series of ideas and principles they say can help world leaders meet this challenge.


Here’s a summary of their argument.

Artificial intelligence will bring scientific advances that are impossible to imagine today and unprecedented access to technology for billions of people, but also toxic misinformation that disrupts democracies and real economic upheaval.

In the process, it will trigger a seismic shift in the structure and balance of global power.

That’s why AI’s creators have become crucial geopolitical actors. Their sovereignty over AI further entrenches an emerging “techno-polar” international order, one in which governments must compete with tech companies for control over these fast-emerging technologies and their continuous development.

Governments around the world are (slowly) awakening to this challenge, but their attempts to use existing laws and rules to govern AI won’t work, because the complexity of these technologies and the speed of their advance will make it nearly impossible for policymakers and regulators to keep pace.

Policymakers now have a short time in which to build a new governance model to manage this historic transition. If they don’t move quickly, they may never catch up.

If global AI governance is to succeed, the international system must move past traditional ideas of sovereignty by welcoming tech companies to the planning table. More importantly, AI’s unique features must be reflected in its governance.

There are five key principles to follow. AI governance must be:

  1. Precautionary: First, rules should do no harm.
  2. Agile: Rules must be flexible enough to evolve as quickly as the technologies do.
  3. Inclusive: The rule-making process must include every actor needed to regulate AI intelligently, not just governments.
  4. Impermeable: Because a single bad actor or breakaway algorithm can create a universal threat, rules must be comprehensive.
  5. Targeted: Rules must be carefully targeted, rather than one-size-fits-all, because AI is a general-purpose technology that poses multidimensional threats.

A strong “techno-prudential” model built on these principles, something akin to the macroprudential role that global financial institutions like the International Monetary Fund play in overseeing risks to the financial system, would mitigate the societal risks posed by AI. It would also ease tensions between China and the United States by reducing the extent to which AI remains an arena, and a tool, of geopolitical competition.

The techno-prudential mandate they propose would comprise at least three overlapping governance regimes, each addressing a different aspect of AI.

The first, like the UN’s Intergovernmental Panel on Climate Change, would be a global scientific body that can objectively advise governments and international bodies on basic definitional questions about AI.

The second would resemble the monitoring and verification approaches of arms control regimes to prevent an all-out arms race between countries like the US and China.

The third, a Geotechnology Stability Board, would manage the disruptive forces of AI, much as financial authorities do with monetary policy today.

Creating several institutions with different competencies to address different aspects of AI will give us the best chance of limiting AI risk without blocking the innovation that can change all our lives for the better.

So, GZERO reader, what do you think? Can and should this path be followed? Share your thoughts with us here.