How will Trump 2.0 impact AI?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on five broad worries about the implications of the US election for artificial intelligence.
I spent the past week in the UK and Europe talking to a ton of people in the tech and democracy community. And of course, everybody just wanted to talk about the implications of the US election. It's safe to say that there are some pretty grave concerns, so I thought I could spend a few minutes, a few more than I usually do in these videos, outlining the nature and type of these concerns, particularly amongst those who worry about the conflation of power between national governments and tech companies. In short, I heard five broad worries.
First, that we're going to see an unprecedented confluence of tech power and political power. In short, the influence of US tech money is going to be turbocharged. That influence, of course, always existed, but the two forms of power are now far more fully joined. This means that the interests of a small number of companies will be one and the same as the interests of the US government. Musk's interests (Tesla, Starlink, Neuralink) are sure to be front and center. But companies like Peter Thiel's Palantir and Palmer Luckey's Anduril are also likely to get massive new defense contracts. And the crypto investments of some of Silicon Valley's biggest VCs are sure to be boosted and supported.
The flip side of this concentration of power in some of Silicon Valley's more libertarian conservatives is that tech companies on the wrong side of the realignment might find trouble. Musk adding Microsoft to his OpenAI lawsuit is an early tell. It'll be interesting to see where Zuckerberg and Bezos land, given Trump's animosity toward both.
Second, for democratic countries outside the US, we're going to see a severe erosion of digital governance sovereignty. Simply put, it's going to become tremendously hard for countries to govern digital technologies, including online platforms, AI, biotech, and crypto, in ways that aren't aligned with US interests. The main lever the Trump administration has to pull in this regard is bilateral trade agreements. These are going to be the big international sticks, likely to overwhelm both the enforcement of existing tech policy and the development of new tech policy.
In Canada, for example, our Online News Act, our Online Streaming Act, and our Digital Services Tax are all already under fire in US trade disputes. When the USMCA is reopened, as seems likely, expect all of these to be on the table, and expect the Canadian government, whoever is in power, to fold, putting our reliance on US trade ahead of our digital policy agenda. The broader spillover effect of this trade pressure is that countries are unlikely to develop new digital policies during the Trump term. And for policies that aren't repealed, enforcement of existing laws is likely to be slowed or halted entirely. Europe, for example, is very unlikely to enforce the Digital Services Act's provisions against X.
Third, we're likely to see the silencing of US researchers and civil society groups working in the tech and democracy space. This will be done, ironically, in the name of free speech. Early attacks by Jim Jordan against disinformation researchers at US universities are only going to be ramped up. Marc Andreessen and Musk have both called for researchers working on election interference and misinformation to be prosecuted. And Trump has called for suspending the nonprofit status of universities that have housed this work.
Faced with this kind of existential threat, universities are very likely to abandon these scholars and their labs entirely. Civil society groups working on these same issues are going to be targeted, and many are sure to close under the pressure. It's simply tragic that efforts to better understand how information flows through our digital media ecosystem will be rendered impossible right when they're needed most, at a time when the health and integrity of that ecosystem are under attack, and all in the name of protecting free speech. This is Kafkaesque, to say the least.
Fourth, and partly as a result of all of the above, we may see new political space open up internationally for conversations about national communications infrastructure. For decades, the driving force in the media policy debate has been globalization and the adoption of largely US-based platforms. That force has been a real headwind for those who, like earlier generations, urged the development of national capacity and protectionist media policy. But I wonder how long the status quo remains tenable in a world where the richest person in the world owns a major social media platform and dominates global low-orbit broadband.
Does a country like Canada, for example, want to hand our media infrastructure over to a single individual, one who has shown careless disregard for the one media platform he already controls and shapes? Will other countries follow America's lead if Trump sells US broadcast licenses and targets American journalism? Will killing Section 230, as Trump has said he wants to do, and the limits that would place on platforms moderating even the worst online abuse, further hasten the hardening of national digital borders?
Fifth and finally, how things play out for AI is actually a bit of a mystery, though the outcome will likely err on the side of unregulated markets. While Musk may once have been a champion of AI regulation, with legitimate concerns about unchecked AGI, he now seems more concerned about the political bias of AI than about any sort of existential risk. As the head of a new government agency mandated to cut a third of the federal budget, Musk is more likely to see AI as a cheap replacement for human labor than as a threat that needs a new agency to regulate it.
In all of this, one thing is certain: we really are in for a bumpy ride. For those who have been concerned about the relationship between political and tech power for well over a decade, our work has only just begun. I'm Taylor Owen, and thanks for watching.
Gov. Gavin Newsom vetoes California’s AI safety bill
California Gov. Gavin Newsom on Sunday vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, the AI safety bill passed by the state’s legislature in August.
Newsom has signed other AI-related bills into law, such as two recent measures protecting performers from AI deepfakes of their likenesses, but vetoed this one over concerns about the focus of the would-be law.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote in a letter on Sept. 29. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
Democratic state Sen. Scott Wiener, who sponsored the bill, called the veto a “setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” Wiener hasn’t disclosed his next steps but vowed to continue pushing the envelope on AI regulation in the state: “California will continue to lead in that conversation — we are not going anywhere.”
Global researchers sign new pact to make AI a “global public good”
A coalition of 21 influential artificial intelligence researchers and technology policy professionals signed a new agreement — the Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence — at the United Nations General Assembly in New York on Thursday, Sept. 26.
The declaration comes one week after the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence (HLAB-AI) released its final report detailing seven recommendations for the UN to promote responsible and safe AI governance.
The Manhattan Declaration, which shares some signatories with the HLAB-AI group — including Google’s James Manyika, former Spanish government official Carme Artigas, and the Institute for Advanced Study’s Alondra Nelson — is a 10-point decree seeking to shape the contours of future AI development. It asks researchers to promote scientific cooperation among diverse and inclusive perspectives, conduct transparent research and risk assessment into AI models, and commit to responsible development and use, among other priorities. Nelson co-sponsored the declaration alongside University of Montreal professor Yoshua Bengio, and other signatories include officials from Alibaba, IBM, the Carnegie Endowment for International Peace, and the Center for AI Safety.
This is meant to foster AI as a “global public good,” as the signatories put it.
“We reaffirm our commitment to developing AI systems that are beneficial to humanity and acknowledge their pivotal role in attaining the global Sustainable Development Goals, such as improved health and education,” they wrote. “We emphasize that AI systems’ whole life cycle, including design, development, and deployment, must be aligned with core principles, safeguarding human rights, privacy, fairness, and dignity for all.”
That’s the crux of the declaration: Artificial intelligence isn’t just something to be controlled, but a technology that can — if harnessed in a way that respects human rights and privacy — help society solve its biggest problems. During a recent panel conversation led by Eurasia Group and GZERO Media founder and president Ian Bremmer (also a member of the HLAB-AI group), Google’s Manyika cited International Telecommunication Union research that found most of the UN’s Sustainable Development Goals could be achieved with help from AI.
While other AI treaties, agreements, and declarations — such as the UK’s Bletchley Declaration signed last year — include a combination of governments, tech companies, and academics, the Manhattan Declaration focuses on those actually researching artificial intelligence. “As AI scientists and technology-policy researchers, we advocate for a truly inclusive, global approach to understanding AI’s capabilities, opportunities, and risks,” the letter concludes. “This is essential for shaping effective global governance of AI technologies. Together, we can ensure that the development of advanced AI systems benefits all of humanity.”
Ian Explains: Why is the UN's Summit of the Future so important?
Will the United Nations be able to adapt to address problems of the modern era, like artificial intelligence and the growing digital divide? On Ian Explains, Ian Bremmer looks at the challenges of multilateralism in an increasingly fragmented world.
In the face of crises like Russia’s invasion of Ukraine, the war in Gaza, and a rapidly warming planet, the UN’s goals of peace and security can feel like a failure. But this year’s Summit of the Future during the General Assembly could be a turning point for the 78-year-old institution. UN members will vote on a Global Digital Compact to regulate AI, fight misinformation, and connect the whole world to the internet. Bremmer is one of 39 experts on the UN’s High-Level Advisory Body who have been studying global AI governance for the past year to better understand what that Compact should include. This week, the group released a report called “Governing AI for Humanity,” with recommendations for creating a global regulatory framework for AI that is safe, inclusive, and equitable. After the patchwork of regulation so far, concentrated in wealthy countries, can the UN lead the global AI conversation?
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
Can the UN get the world to agree on AI safety?
Artificial intelligence has the power to transform our world, but it’s also an existential threat. There's been a patchwork of efforts to regulate AI, but they’ve been concentrated in wealthy countries, while those in the Global South, who stand to benefit most from AI’s potential, have been left out. Can the United Nations come together at this year’s General Assembly to agree on standards for a safe, equitable, and inclusive AI future?
Tomorrow, the UN’s High Level Advisory Body on AI will release a report called “Governing AI for Humanity,” with recommendations for global AI governance that will be a roadmap for safeguarding our digital future and making sure AI will truly benefit everyone in the world. Ian Bremmer is one of the 39 experts on the AI Advisory Body, and he sat down with UN Secretary-General António Guterres for an exclusive GZERO World interview on the sidelines of the General Assembly to discuss the report and why Guterres believes the UN is the only organization capable of creating a truly global, inclusive framework for AI.
“The United Nations has one important characteristic: its legitimacy. It's a platform where everybody can be together,” Guterres says. “Others have the power, others have the money, but not the legitimacy or the convening power the UN has.”
The exclusive conversation begins airing nationally on GZERO World with Ian Bremmer on public television this Friday, Sept. 20. Everything you need to know about the Advisory Body’s final report will be dissected and analyzed in the GZERO Daily, landing in inboxes tomorrow (Sept. 19) at 7 am. Sign up here.
AI's evolving role in society
In a world where humanity put a man on the moon before adding wheels to luggage, the rapid advancements in AI seem almost paradoxical. Microsoft’s chief data scientist Juan Lavista, in a recent Global Stage conversation with Tony Maciulis, highlighted this contrast to emphasize how swiftly AI has evolved, particularly in the last few years.
Lavista discussed the impact of generative AI, which allows users to create hyper-realistic images, videos, and audio. This capability is both impressive and concerning, as demonstrated in Microsoft's “Real or Not?” quiz, where even experts struggle to distinguish AI-generated images from real ones.
While AI offers incredible tools for good, Lavista warns of the potential risks, particularly with deepfakes and other deceptive technologies. He stresses the importance of public education and the need for AI models to be trained on diverse data to avoid biases.
As AI continues to evolve, its impact on daily life will only grow. Lavista predicts more accurate and less error-prone models in the future, underscoring the need to balance innovation with responsible use.
What Sam Altman wants from Washington
Altman’s argument is not new, but his policy prescriptions are more detailed than before. In addition to the general undertone that Washington should trust the AI industry to regulate itself, the OpenAI chief calls for improved cybersecurity measures, investment in infrastructure, and new models for global AI governance. He wants additional security and funding for data centers, for instance, and says doing so will create jobs around the country. He also urges additional export controls and foreign investment rules to keep the AI industry under US control, and outlines potential global governance structures to oversee the development of AI.
We’ve heard Altman’s call for self-regulation and industry-friendly policies before — he has become something of a chief lobbyist for the AI industry over the past two years. His framing of AI development as a national security imperative echoes a familiar strategy used by emerging tech sectors to garner government support and funding.
Scott Bade, a senior geotechnology analyst at Eurasia Group, says Altman wants to “position the AI sector as a national champion. Every emerging tech sector is doing this: ‘We’re essential to the future of US power [and] competitiveness [and] innovation so therefore [the US government] should subsidize us.’”
Moreover, Altman’s op-ed has notable omissions. AI researcher Melanie Mitchell, a professor at the Santa Fe Institute, points out on X that there’s no mention of AI’s negative effects on the climate, given that AI requires immense amounts of electricity. She also highlights a crucial irony in Altman’s insistence on safeguarding intellectual property: “He’s worrying about hackers stealing AI training data from AI companies like OpenAI, not about AI companies like OpenAI stealing training data from the people who created it!”
The timing of Altman’s op-ed is also intriguing. It comes as the US political landscape is shifting, with the upcoming presidential election no longer seen as a sure win for Republicans. The race between Kamala Harris and Donald Trump is now considered a toss-up, according to the latest polling since Harris entered the race a week and a half ago. This changing dynamic may explain why Altman is putting forward more concrete policy proposals now rather than counting on a more laissez-faire approach to come into power in January.
Harris is comfortable both taking on Silicon Valley and advocating for US AI policy on a global stage, as we wrote in last week’s edition. Altman will want to make sure his voice — perhaps the loudest industry voice — gets heard no matter who is elected in November.
AI & election security
With an estimated 4 billion people—almost half the world’s population—voting in elections in 2024, AI's influence on those contests has been minimal so far, but its potential impact looms large. Ginny Badanes, general manager of Democracy Forward at Microsoft, explained that while AI-driven disruptions like deepfake videos and robocalls haven't altered results yet, they have undermined public trust.
“I think people are becoming more and more aware of the fact that AI could be a disruptor in the elections, which I actually think is a positive thing. However, it does have the downside effect of people are starting to question what they're looking at and wondering if they can trust what they see.”
Badanes sat down with GZERO’s Tony Maciulis to discuss how AI has yet to change election outcomes, and why continuous efforts from both the tech industry and governments are crucial to safeguarding future elections.