But after the boom, often comes the bust. China’s experience can be both a roadmap and a warning. The results of its building spree have been astounding: more high-speed rail than the rest of the world combined, soaring GDP growth, hundreds of millions lifted into the middle class. But the People’s Republic is now dealing with a stagnating economy. Local governments that financed all that construction are drowning in debt. China bet on physical infrastructure. The US is gambling on digital. If AI doesn’t deliver on its promise, both could end up in the same place: buried under the weight of their own ambition.
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
How do we ensure AI is trustworthy in an era of rapid technological change?
Baroness Joanna Shields, Executive Chair of the Responsible AI Future Foundation, says it starts with principles of responsible AI and a commitment to ethical development.
Shields explains that her foundation’s work “is about empowering everyone equally and enabling others to level up and be part of this revolution,” highlighting its focus on guiding the ethical development and use of AI.
She emphasizes the critical importance of information integrity, warning that AI systems trained on social media data risk amplifying conspiracy theories and divisive content. Reflecting on her experience at Meta, Shields notes, “Models that are trained with social media data… will further embed and create communities where people are… exposed to damaging content,” underscoring the need for transparency and awareness in AI-generated information.
Shields shared these insights at the 2025 Abu Dhabi Global AI Summit panel “Bringing AI Technology, Trust, and Talent to the World,” part of GZERO Media’s Global Stage series in partnership with Microsoft, which brings together global leaders to discuss the geopolitical and technological trends shaping our world.
As AI begins to understand us better than we understand ourselves, who will decide how it shapes our world?
Ian Bremmer cautions, "The winner or the winners are going to determine in large part what society looks like, what the motivating ideologies are." He stresses that AI’s direction is driven not by technology alone, but by the humans who design and program these systems.
"That's kind of why you need the UN and you need responsible AI governance as part of the conversation," Bremmer adds.
Ian spoke at the 2025 Abu Dhabi Global AI Summit panel “Bringing AI Technology, Trust, and Talent to the World,” part of GZERO Media’s Global Stage series in partnership with Microsoft. The Global Stage series convenes global leaders for critical discussions on the geopolitical and technological trends shaping our world.
Who really shapes and influences the development of AI? The creators or the users?
Peng Xiao, Group CEO of G42, argues it’s both. “I actually do not subscribe that the creators have so much control they can program every intent into this technology so users can only just respond and be part of that design,” he explains. He stresses, “The more a society uses AI, the more we can influence the development of it. We are co-creators, co-influencers of this technology.”
Highlighting the UAE’s national AI strategy, Xiao points to Mohamed bin Zayed University of Artificial Intelligence, where undergraduates as young as 16 are founding their own companies.
The UAE has also launched programs teaching AI to learners aged 7 to 70 and is deploying billions of AI agents to augment productivity across industries, including oil, cybersecurity, and agriculture.
Xiao spoke at the 2025 Abu Dhabi Global AI Summit panel “Bringing AI Technology, Trust, and Talent to the World,” part of GZERO Media’s Global Stage series in partnership with Microsoft. The Global Stage series convenes global leaders for critical discussions on the geopolitical and technological trends shaping our world.
As artificial intelligence transforms work, how do organizations equip people with the skills to thrive?
Brad Smith, Vice Chair and President of Microsoft, says the answer lies in understanding a new landscape of AI skills.
Speaking at the 2025 Abu Dhabi Global AI Summit, Smith outlined three key skills needed in the AI era:
1. AI fluency, the ability to use AI tools effectively;
2. AI engineering, focused on building advanced AI applications; and
3. Organizational leadership, which emphasizes guiding teams through cultural and operational change.
He also highlighted global disparities in AI adoption: “We are in the global capital today of AI adoption … the UAE leads the world with roughly a 59% per capita adoption rate … the United States is only 29%.”
Smith shared these insights during the panel “Bringing AI Technology, Trust, and Talent to the World,” part of GZERO Media’s Global Stage series in partnership with Microsoft, which brings together global leaders to discuss the geopolitical and technological trends shaping our world.
President Joe Biden signs an executive order about artificial intelligence as Vice President Kamala Harris looks on at the White House on Oct. 30, 2023.
US President Joe Biden on Monday signed an expansive executive order on artificial intelligence, directing a bevy of government agencies to set new rules and standards for developers with regard to safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training, citing national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan called the “Hiroshima AI Process.” It also comes just days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event runs Wednesday and Thursday, Nov. 1-2, at Bletchley Park. While several world leaders have passed on attending Sunak’s summit, including Biden and Emmanuel Macron, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”
The two-day summit will focus on the risks of AI, particularly large language models trained on vast amounts of text and data.
Unlike von der Leyen’s EU, with its strict AI regulation, the UK seems more interested in attracting AI firms than immediately reining them in. In March, Sunak’s government unveiled its plan for a “pro-innovation” approach to AI regulation. In announcing the summit, the government’s Department for Science, Innovation, and Technology boasted the country’s “strong credentials” in AI: employing 50,000 people, bringing £3.7 billion to the domestic economy, and housing key firms like DeepMind (now owned by Google), while also investing £100 million in AI safety research.
Despite the UK’s light-touch approach so far, the Council on Foreign Relations described the summit as an opportunity for the US and UK, in particular, to align on policy priorities and “move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries.”
As AI reshapes the global economy, who gets left behind and how can developing nations catch up?
At the 2025 Abu Dhabi Global AI Summit, UNCTAD Secretary-General Rebeca Grynspan warns that without deliberate action, the world’s poorest countries risk exclusion from the AI revolution. “There is no way that trickle down will make the trick,” she tells GZERO Media’s Tony Maciulis. “We have to think about inclusion by design.”
Grynspan stresses that financing and investment, not just aid, are critical: “3.4 billion people live in countries spending more on debt service than on health or education.” She calls for the World Bank and IMF to “assume more risk” to help scale private investment in developing economies.
Despite rising tariffs and trade tensions, she notes that trade remains resilient, driven by digital services, AI innovation, and the growing need for smarter global cooperation.
This conversation is part of GZERO Media’s Global Stage series, presented in partnership with Microsoft.