UK AI Safety Summit brings government leaders and AI experts together


Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she takes you behind the scenes of the first-ever UK AI Safety Summit.

Last week, the AI Safety Summit took place, and I'm sure you've read all the headlines, but I thought it would be fun to also take you behind the scenes a little bit. I arrived early in the morning on the day the summit started; everybody was made to go through security between 7 and 8 AM, so pretty early, while the program only started at 10:30. What that led to was a long reception over coffee where old friends and colleagues met, new people were introduced, and all the participants from business, government, civil society, and academia really started to mingle.

And maybe that was part of the success of the summit, which then opened formally with remarkably global representation. There had been some discussion about whether it was appropriate to invite the Chinese government, but a Chinese minister was indeed there, as were representatives from India and Nigeria, underlining that the challenges governments have to deal with around artificial intelligence are global ones. I think that was an important symbol the UK government sought to project. Now, there was a little surprise in the opening when US Commerce Secretary Raimondo announced that the United States would also launch an AI Safety Institute, right after the UK government had announced its own. It did make me wonder: why not just work together globally? But I guess they each want their own institute.

Those were perhaps the more concrete, tangible outcomes of the conference. Beyond that, it was mostly a statement of intent to look further into the risks of AI. Ahead of the conference, there had been a lot of discussion about whether the UK government was taking too narrow a focus on AI safety, and whether it had leaned too far toward the effective altruism, existential-risk camp. But in practice, the program gave a lot of room, and I thought this was really important, to discussions of the known, present-day risks that AI presents: to civil rights, when we think about discrimination, or to human rights, when we think about the threats to democracy, both from the disinformation that generative AI can put on steroids and from the real question of how to govern AI at all when companies have so much power and there is such a lack of transparency. So civil society leaders who worried that they were not sufficiently heard in the program will hopefully feel a little more reassured; I spoke to a wide variety of civil society representatives who were a key part of the participants alongside government, business, and academic leaders.

When I talked to some of the first generation of thinkers and researchers in the field of AI, this was for them a significant moment: they had never thought they would be part of a summit alongside government leaders. For a long time they were mostly in their labs researching AI, and suddenly here they were, listened to at the podium next to government representatives. In a way they were a little starstruck, and I thought that was funny, because it was probably the same the other way around, certainly for the Prime Minister, who really looked like a proud student when he was interviewing Elon Musk. That was another surprising development: shortly after the press conference, his moment to shine in the media with the outcomes of the summit, Prime Minister Sunak chose to spend the airtime, and certainly the social media coverage, interviewing Elon Musk, who then predicted that AI would eradicate lots and lots of jobs. Remarkably, that topic barely got mentioned at the summit itself, so maybe it was a good thing it became part of the discussion after all, albeit in an unusual way.
