Gov. Gavin Newsom vetoes California’s AI safety bill
California Gov. Gavin Newsom on Sunday vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, the AI safety bill passed by the state’s legislature in August.
Newsom has signed other AI-related bills into law, such as two recent measures protecting performers from AI deepfakes of their likenesses, but vetoed this one over concerns about the focus of the would-be law.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote in a letter on Sept. 29. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
Democratic state Sen. Scott Wiener, who sponsored the bill, called the veto a “setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” Wiener hasn’t disclosed the next steps but vowed to continue pushing the envelope on AI regulation in the state. “California will continue to lead in that conversation — we are not going anywhere.”

Global researchers sign new pact to make AI a “global public good”
A coalition of 21 influential artificial intelligence researchers and technology policy professionals signed a new agreement — the Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence — at the United Nations General Assembly in New York on Thursday, Sept. 26.
The declaration comes one week after the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence (HLAB-AI) released its final report detailing seven recommendations for the UN to promote responsible and safe AI governance.
The Manhattan Declaration, which shares some signatories with the HLAB-AI group — including Google’s James Manyika, former Spanish government official Carme Artigas, and the Institute for Advanced Study’s Alondra Nelson — is a 10-point decree seeking to shape the contours of future AI development. It asks researchers to promote scientific cooperation among diverse and inclusive perspectives, conduct transparent research and risk assessment into AI models, and commit to responsible development and use, among other priorities. Nelson co-sponsored the declaration alongside University of Montreal professor Yoshua Bengio, and other signatories include officials from Alibaba, IBM, the Carnegie Endowment for International Peace, and the Center for AI Safety.
This is meant to foster AI as a “global public good,” as the signatories put it.
“We reaffirm our commitment to developing AI systems that are beneficial to humanity and acknowledge their pivotal role in attaining the global Sustainable Development Goals, such as improved health and education,” they wrote. “We emphasize that AI systems’ whole life cycle, including design, development, and deployment, must be aligned with core principles, safeguarding human rights, privacy, fairness, and dignity for all.”
That’s the crux of the declaration: Artificial intelligence isn’t just something to be controlled, but a technology that can — if harnessed in a way that respects human rights and privacy — help society solve its biggest problems. During a recent panel conversation led by Eurasia Group and GZERO Media founder and president Ian Bremmer (also a member of the HLAB-AI group), Google’s Manyika cited International Telecommunication Union research that found most of the UN’s Sustainable Development Goals could be achieved with help from AI.
While other AI treaties, agreements, and declarations — such as the UK’s Bletchley Declaration signed last year — include a combination of governments, tech companies, and academics, the Manhattan Declaration focuses on those actually researching artificial intelligence. “As AI scientists and technology-policy researchers, we advocate for a truly inclusive, global approach to understanding AI’s capabilities, opportunities, and risks,” the letter concludes. “This is essential for shaping effective global governance of AI technologies. Together, we can ensure that the development of advanced AI systems benefits all of humanity.”
Ian Explains: Why is the UN's Summit of the Future so important?
Will the United Nations be able to adapt to address problems of the modern era, like artificial intelligence and the growing digital divide? On Ian Explains, Ian Bremmer looks at the challenges of multilateralism in an increasingly fragmented world.
In the face of crises like Russia’s invasion of Ukraine, the war in Gaza, and a rapidly warming planet, the UN’s goals of peace and security feel like a failure. But this year’s Summit of the Future during the General Assembly could be a turning point for the 78-year-old institution. UN members will vote on a Global Digital Compact to regulate AI, fight misinformation, and connect the whole world to the internet. Bremmer is one of 39 experts on the UN’s High-Level Advisory Body who have been studying the issue of global AI governance for the past year to better understand what that Compact should include. This week, the group released a report called “Governing AI for Humanity” with recommendations for creating a global regulatory framework for AI that is safe, inclusive, and equitable. Rather than the patchwork of regulation that has emerged so far, concentrated in wealthy countries, can the UN lead the global AI conversation?
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
Can the UN get the world to agree on AI safety?
Artificial intelligence has the power to transform our world, but it’s also an existential threat. There's been a patchwork of efforts to regulate AI, but they’ve been concentrated in wealthy countries, while those in the Global South, who stand to benefit most from AI’s potential, have been left out. Can the United Nations come together at this year’s General Assembly to agree on standards for a safe, equitable, and inclusive AI future?
Tomorrow, the UN’s High-Level Advisory Body on AI will release a report called “Governing AI for Humanity,” with recommendations for global AI governance that will be a roadmap for safeguarding our digital future and making sure AI will truly benefit everyone in the world. Ian Bremmer is one of the 39 experts on the AI Advisory Body, and he sat down with UN Secretary-General António Guterres for an exclusive GZERO World interview on the sidelines of the General Assembly to discuss the report and why Guterres believes the UN is the only organization capable of creating a truly global, inclusive framework for AI.
“The United Nations has one important characteristic: its legitimacy. It's a platform where everybody can be together,” Guterres says. “Others have the power, others have the money, but not the legitimacy or the convening power the UN has.”
The exclusive conversation begins airing nationally on GZERO World with Ian Bremmer on public television this Friday, Sept. 20. Everything you need to know about the Advisory Body’s final report will be dissected and analyzed in the GZERO Daily, landing in inboxes tomorrow (Sept. 19) at 7 am. Sign up here.
AI's evolving role in society
In a world where humanity put a man on the moon before adding wheels to luggage, the rapid advancements in AI seem almost paradoxical. Microsoft’s chief data scientist Juan Lavista, in a recent Global Stage conversation with Tony Maciulis, highlighted this contrast to emphasize how swiftly AI has evolved, particularly in the last few years.
Lavista discussed the impact of generative AI, which allows users to create hyper-realistic images, videos, and audio. This capability is both impressive and concerning, as demonstrated in Microsoft’s “Real or Not?” quiz, where even experts struggle to distinguish between AI-generated and real images.
While AI offers incredible tools for good, Lavista warns of the potential risks, particularly with deepfakes and other deceptive technologies. He stresses the importance of public education and the need for AI models to be trained on diverse data to avoid biases.
As AI continues to evolve, its impact on daily life will only grow. Lavista predicts more accurate and less error-prone models in the future, underscoring the need to balance innovation with responsible use.
What Sam Altman wants from Washington
Altman’s argument is not new, but his policy prescriptions are more detailed than before. In addition to the general undertone that Washington should trust the AI industry to regulate itself, the OpenAI chief calls for improved cybersecurity measures, investment in infrastructure, and new models for global AI governance. He wants additional security and funding for data centers, for instance, and says doing this will create jobs around the country. He also urges the use of additional export controls and foreign investment rules to keep the AI industry in US control, and outlines potentially global governance structures to oversee the development of AI.
We’ve heard Altman’s call for self-regulation and industry-friendly policies before — he has become something of a chief lobbyist for the AI industry over the past two years. His framing of AI development as a national security imperative echoes a familiar strategy used by emerging tech sectors to garner government support and funding.
Scott Bade, a senior geotechnology analyst at Eurasia Group, says Altman wants to “position the AI sector as a national champion. Every emerging tech sector is doing this:
‘We’re essential to the future of US power [and] competitiveness [and] innovation so therefore [the US government] should subsidize us.’”
Moreover, Altman’s op-ed has notable omissions. AI researcher Melanie Mitchell, a professor at the Santa Fe Institute, points out on X that there’s no mention of the negative effects on the climate, given that AI requires immense amounts of electricity. She also highlights a crucial irony in Altman’s insistence on safeguarding intellectual property: “He’s worrying about hackers stealing AI training data from AI companies like OpenAI, not about AI companies like OpenAI stealing training data from the people who created it!”
The timing of Altman’s op-ed is also intriguing. It comes as the US political landscape is shifting, with the upcoming presidential election no longer seen as a sure win for Republicans. The race between Kamala Harris and Donald Trump is now considered a toss-up, according to the latest polling since Harris entered the race a week and a half ago. This changing dynamic may explain why Altman is putting forward more concrete policy proposals now rather than counting on a more laissez-faire approach to come into power in January.
Harris is both comfortable with taking on Silicon Valley and advocating for US AI policy on a global stage, as we wrote in last week’s edition. Altman will want to make sure his voice — perhaps the loudest industry voice — gets heard no matter who is elected in November.

AI & election security
With an estimated 4 billion people — almost half the world’s population — voting or having already voted in the 2024 elections, AI's influence has been minimal so far, but its potential impact looms large. Ginny Badanes, general manager of Democracy Forward at Microsoft, explained that while AI-driven disruptions like deepfake videos and robocalls haven't altered results yet, they have undermined public trust.
“I think people are becoming more and more aware of the fact that AI could be a disruptor in the elections, which I actually think is a positive thing. However, it does have the downside effect of people are starting to question what they're looking at and wondering if they can trust what they see.”
Badanes sat down with GZERO’s Tony Maciulis to discuss how AI has yet to change election outcomes, and why continuous efforts from both the tech industry and governments are crucial to safeguarding future elections.
Come inside the tech lab making accessibility fun
It all started with gaming, modifications for joysticks, and controllers that allow disabled veterans to once again play their favorite video games. Now, Microsoft’s Inclusive Tech Lab is a haven of innovation and creativity, featuring toys and tools created by and for the disability community. Come along as Program Manager Solomon Romney takes GZERO on an exclusive tour of the lab making accessibility awesome.
Watch more interviews from Global Stage.