An inflection point for Microsoft
Microsoft made headlines last week, hiring Mustafa Suleyman to lead its internal AI group. Suleyman is a big name in the world of artificial intelligence, largely because he co-founded the influential British research lab DeepMind, which Google acquired in 2014 for over $500 million. But in hiring Suleyman, Microsoft also kinda, maybe, sorta acquired his current AI startup, called Inflection AI.
Microsoft didn’t just hire Suleyman and co-founder Karén Simonyan; it hired “most of the staff” of the $4 billion startup. It then paid the remaining husk of Inflection $650 million to license its technology, which Inflection is using to pay off its remaining investors. It’s as close to an acquisition as you can get without actually buying a company. And there's a good reason for this: The current antitrust environment is tough for tech. The government has a watchful eye on mergers, and so Big Tech has often opted against buying startups outright: We’ve seen Microsoft invest $13 billion in OpenAI, while Amazon and Google have each poured billions into Anthropic.
But the government has broad authority over mergers, even if they’re partial or untraditional in nature, experts told GZERO recently. Put simply, we’d be surprised if this acqui-hire of sorts is enough to deter the government’s antitrust enforcers, who are already sniffing around Microsoft’s investment in and power over OpenAI.
Microsoft's big-name hire
What a splash! Microsoft announced earlier today that it has hired one of the most prominent figures in the AI revolution: Mustafa Suleyman. Suleyman co-founded the British AI research lab DeepMind, which Google acquired in 2014 for £400 million (about $656 million).
Suleyman will run a new division called Microsoft AI, overseeing its Copilot and Bing products, among others. Microsoft has become a major player in generative AI through its $13 billion investment in ChatGPT-maker OpenAI, whose deep-learning language models now fuel Microsoft's own AI offerings. Suleyman will focus on advancing consumer products — in other words, getting you to use this cutting-edge tech.
AI's rapid rise
In a remarkable shift, AI has catapulted to the forefront of global conversations within the span of just one year. From political leaders to multilateral organizations, the dialogue has swiftly transitioned from mere curiosity to deep-seated concern. Ian Bremmer, founder and president of GZERO Media and Eurasia Group, says AI transcends traditional geopolitical boundaries. Notably, control over AI rests not with governments but predominantly in the hands of technology corporations.
This unconventional dynamic prompts a recalibration of governance strategies. Unlike past challenges that could be addressed in isolation, AI's complexity necessitates collaboration with its creators—engineers, scientists, technologists, and corporate leaders. The emergence of a new era, where technology companies hold significant sway, has redefined the political landscape. The journey to understand and govern AI is a collaborative endeavor that promises both learning and transformation.
Watch the full conversation: Governing AI Before It’s Too Late
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- The AI power paradox: Rules for AI's power ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- Ian Bremmer explains: Should we worry about AI? ›
- The geopolitics of AI ›
- Making rules for AI … before it’s too late ›
- How should artificial intelligence be governed? ›
- How AI can be used in public policy: Anne Witkowsky - GZERO Media ›
AI governance: Cultivating responsibility
Mustafa Suleyman, a prominent voice in the AI landscape and CEO and co-founder of Inflection AI, contends that effective regulation transcends legal frameworks: it encompasses a culture of self-regulation and informed regulatory comprehension. Today's AI leaders exhibit a unique blend of optimism and caution, recognizing both the transformative potential and the pitfalls of AI technologies. Suleyman underscores the paradigm shift compared to the era of social media dominance.
This time, AI leaders have been proactive in raising concerns and questions about the technology's impact. Balancing innovation's pace with prudent safeguards is the goal, acknowledging that through collective efforts, the benefits of AI can far outweigh its drawbacks. Suleyman highlights that advanced AI models are increasingly controllable and capable of producing desired, safe outputs. He encourages external oversight and welcomes regulation as a proactive and thoughtful measure. The message is clear: the path to harnessing AI's power lies in fostering a culture of responsible development and collaborative regulatory action.
Watch the full conversation: Governing AI Before It’s Too Late
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- How should artificial intelligence be governed? ›
- Making rules for AI … before it’s too late ›
- The AI power paradox: Rules for AI's power ›
- Is life better than ever for the human race? - GZERO Media ›
- AI and data regulation in 2023 play a key role in democracy - GZERO Media ›
Insights on AI governance and global stability
Ian Bremmer and Mustafa Suleyman, CEO and co-founder of Inflection AI, delve into the realm of AI governance and its vital role in shaping our rapidly evolving world. Just as macro-prudential policies govern global finance, society now finds itself in need of techno-prudential policies to ensure that artificial intelligence (AI) flourishes without compromising global stability. AI presents multifaceted challenges, including disinformation, technology proliferation, and the urgent need to strike a balance between innovation and risk management.
A vision for inclusive AI governance
Casting a spotlight on the intricate landscape of AI governance, Ian Bremmer, president and founder of GZERO Media and Eurasia Group, and Mustafa Suleyman, CEO and co-founder of Inflection AI, unravel the pressing need for collaboration between governments, advanced industrial players, corporations, and a diverse spectrum of stakeholders in the AI domain. The exponential pace of this technological evolution demands a united front, and the stakes have never been higher. There is urgency in getting AI governance right, and the perils of getting it wrong could be catastrophic. While tech giants acknowledge this necessity, they remain engrossed in their own domains, underscoring the imperative for collective action.
Mustafa vividly illustrates the competitive dynamics among AI developers vying for supremacy, stressing that cooperation between corporations and governments is pivotal. Ian emphasizes the existing techno-polar world and the importance of inclusivity in shaping AI's trajectory. The conversation makes clear that the way forward isn't confined to legislative channels, but is rather a tapestry woven with non-governmental organizations, academics, critics, and civil society entities. Mustafa argues that diversity and inclusivity breed resilience. The duo makes a compelling case for stakeholder collaboration, drawing a parallel between their own alignment and a potential accord between major tech leaders and governments.
Watch the full conversation: Governing AI Before It’s Too Late
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- How should artificial intelligence be governed? ›
- Making rules for AI … before it’s too late ›
- The AI power paradox: Rules for AI's power ›
- How AI can be used in public policy: Anne Witkowsky - GZERO Media ›
- Use new data to fight climate change & other challenges: UN tech envoy - GZERO Media ›
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect - GZERO Media ›
- State of the World with Ian Bremmer: December 2023 ›
- AI's evolving role in society - GZERO Media ›
- AI plus existing technology: A recipe for tackling global crisis - GZERO Media ›
- AI and data regulation in 2023 play a key role in democracy - GZERO Media ›
Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox
Listen: Dive into the world of artificial intelligence in our new GZERO World podcast episode. Ian Bremmer, founder of Eurasia Group and GZERO Media, teams up with Mustafa Suleyman, CEO of Inflection AI, to discuss their article “The AI Power Paradox,” recently published in Foreign Affairs magazine. Uncover the explosive growth and potential risks of generative AI, and explore Ian and Mustafa’s proposed five principles for effective AI governance. Join host Evan Solomon as he delves into the crucial conversation about regulating AI before it spirals out of control, without stifling innovation. Tune in for insights on technology, politics, and securing our global future.
- The geopolitics of AI ›
- How should artificial intelligence be governed? ›
- Making rules for AI … before it’s too late ›
- Governing AI Before It’s Too Late ›
- The AI power paradox: Rules for AI's power ›
- A vision for inclusive AI governance - GZERO Media ›
- Artificial intelligence: How soon will we see meaningful progress? - GZERO Media ›
- Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa - GZERO Media ›
- AI's rapid rise - GZERO Media ›
- Ian Bremmer: On AI regulation, governments must step up to protect our social fabric - GZERO Media ›
- Staving off "the dark side" of artificial intelligence: UN Deputy Secretary-General Amina Mohammed - GZERO Media ›
The AI power paradox: Rules for AI's power
Ian Bremmer's Quick Take: Hi everybody, Ian Bremmer here, and a piece to share with you that I've just completed with Mustafa Suleyman, my good friend, the co-founder of DeepMind and now Inflection AI, in Foreign Affairs.
The piece is called "The AI Power Paradox: Can states learn to govern artificial intelligence before it’s too late?" It's about the biggest change in power and governance in a very short period of time that I've experienced in my lifetime.
It's how to deal with artificial intelligence. Just a year ago, there wasn't a head of state I would meet with who was asking anything about AI, and now it's in every single meeting. In part, that's because of how explosive this technology has suddenly become, in terms of its staggering upside. I mean, when anyone with a smartphone has access to global levels of intelligence in an education field, in a health care field, in a managerial field, not just access to information and communication but access to intelligence and the ability to take action with it, that is a game changer for globalization and for productivity of a sort that we've never seen before in such a short period of time, all across the world. And yet those same technologies can be used in very disruptive ways by bad actors all over the world, and not just governments, but organizations and individual people. And that's a big piece of why these leaders are concerned.
They're concerned about whether we can still run an election that's free and fair and that people will believe in. Will we be able to limit the proliferation of bad actors developing and distributing malware or bioweapons? And will intellectual workers, white-collar workers, still have jobs, still have productive things to do? But they're also concerned because the top issues that policymakers care about are affected very dramatically by AI, whether it's how you think about the war with Russia and Russia's ability to be a disruptive actor, or US-China relations and to what extent that continues to be a reasonably stable and constructive interdependent relationship.
And also the United States and other advanced industrial democracies, can they persist as functional democracies given the proliferation of AI? So everyone's worried about it. Everyone has urgency. Very few people know what to do. So a few big takeaways from us in this piece.
The big concept that we think should infuse AI governance is techno-prudentialism. That's a big, long word, but it's aligned with macro-prudentialism, the way that global finance has been governed: the idea that you need to identify and limit risks to global stability from AI without choking off innovation and the opportunities that come from it. That's the way the Financial Stability Board works, and the Bank for International Settlements, and the IMF. Despite all of the conflict between the United States, China, and the Europeans, they all work together in those institutions. They do it because global finance is too important to allow it to break. It fundamentally needs to be, and is, global. It has the potential to cause systemic contagion and collapse, and everyone wants to work against and mitigate that.
So techno-prudentialism would be applying that to the AI space. With that as a backdrop, we see five principles that should direct AI governance, principles you want to keep in mind when you're thinking about governing AI. Number one, the precautionary principle: do no harm. Obvious in the medical field, and it needs to be obvious in the AI field. AI is incredibly suffused with opportunities for global growth, but it is also enormously dangerous. Caution has to be in place, because tinkering with these institutions, creating capabilities for regulation, can be incredibly dangerous and can also cut off incredible innovation. So that level of humility, as we think about governing a completely new set of technologies that will change very, very quickly, should be number one.
Number two, agile. Because these technologies are changing so quickly, the institutions and the architecture that you create need to themselves be very flexible. They need to be able to adapt and course correct as AI itself evolves and improves. Usually we put architecture together that's meant to be as strong and stable as humanly possible, so that nothing can break it. And that also means it usually can't change very much, whether you're talking about the Security Council of the United Nations, or NATO, or the European Union. That's not the way you need to think about AI governance.
Inclusive. It needs to be a hybrid system. Technology companies are the dominant actors in artificial intelligence. They exert fundamental sovereignty, in what I call a techno-polar order. And we believe that any institutions that govern AI will have to have both technology companies and governments at the table. That doesn't mean tech companies get equal votes, but they're going to have to be directly and mutually involved in governance, because the governments don't have the expertise. They don't understand what these algorithms do, and they're not driving the future.
Impermeable. They have to be global. You can't have slippage when you're talking about technologies that are incredibly dangerous if individual actors get their hands on them and can use them for whatever purposes. They can't be fragmented institutions. They can't be institutions that allow some percentage of AI companies and developers to not be a part of them. The architecture that's created will have to be easy in and very hard out.
And then finally, targeted. This is not one-size-fits-all. AI ends up impacting every sector of the global economy, and very, very different types of institutions will need to be created for different needs. So those are the principles of AI governance. What kind of institutions do we need? The first: just as we have the Intergovernmental Panel on Climate Change through the United Nations, we need the same for artificial intelligence. With the kinds of models, the data, the training that's being done, and the algorithms that are being developed and deployed, you need all of the actors in one space, sharing the same set of facts, which we don't have right now. Everyone's touching different pieces of the elephant. So: an intergovernmental panel on artificial intelligence.
A second would be a geotechnology stability board. This is the group of both national and technology actors that together can react when dangerous disruptions occur: weaponization by cybercriminals, or state-sponsored actors, or lone wolves, as will inevitably happen. Those responses will need to be global, because everyone has a huge stake in not allowing these technologies to suddenly undermine governance on the planet. And finally, we're going to need some form of US-China collaboration. That looks like the hardest piece to put together right now, because of course we don't even talk on defense matters at a high level at all.
And the politics are moving in a very different direction. But with the Americans and the Soviets, we knew that we both had access to these weapons of mass destruction. Even though we hated each other, we knew we had to talk about it so we didn't blow each other up: about what our capabilities were, and what capabilities we thought were too dangerous to develop. That kind of communication needs to happen between the US and China and their top technology actors, especially because not only will some of these technologies be existentially threatening, but lots of them will very quickly be in the hands of actors that developed countries, countries with a lot at stake in maintaining the existing system, will not want to see as a threat. It's not that we believe you can set that up today, but rather that you want the principals of governments and corporations to be talking about it now, so that when the first crises start emerging, they will already be prepared in this direction. They will have a toolkit that they can then take out and start working with.
So that's the view of the piece. I suspect we'll be talking about it an awful lot over the course of the coming weeks and months. I hope you find it interesting and worthwhile, and we've got a link to the piece that we'll be sending on. Have a look at it. Talk to you soon. Bye.
- Making rules for AI … before it’s too late ›
- Governing AI Before It’s Too Late ›
- The AI arms race begins: Scott Galloway’s optimism & warnings ›
- The geopolitics of AI ›
- How should artificial intelligence be governed? ›
- Be more worried about artificial intelligence ›
- A vision for inclusive AI governance - GZERO Media ›
- Scared of rogue AI? Keep humans in the loop, says Microsoft's Natasha Crampton - GZERO Media ›
- How tech companies aim to make AI more ethical and responsible - GZERO Media ›
- How AI can be used in public policy: Anne Witkowsky - GZERO Media ›
- AI's role in the Israel-Hamas war so far - GZERO Media ›
- Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa - GZERO Media ›
- State of the World with Ian Bremmer: December 2023 ›
- A world of conflict: The top risks of 2024 - GZERO Media ›
- Davos 2024: AI is having a moment at the World Economic Forum - GZERO Media ›
- How is the world tackling AI, Davos' hottest topic? - GZERO Media ›
- Grown-up AI conversations are finally happening, says expert Azeem Azhar - GZERO Media ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox - GZERO Media ›
- AI's rapid rise - GZERO Media ›
- AI plus existing technology: A recipe for tackling global crisis - GZERO Media ›
- Staving off "the dark side" of artificial intelligence: UN Deputy Secretary-General Amina Mohammed - GZERO Media ›