OpenAI strikes a scientific partnership with US National Labs
OpenAI said that its models will be used to accelerate scientific research into disease prevention, cybersecurity, mathematics, and physics.
The agreement comes just days after OpenAI announced ChatGPT Gov, a version of the popular chatbot specifically designed for government personnel. The company is the face of the Project Stargate data center and AI infrastructure initiative heralded by President Donald Trump in January.
Big Tech under Trump 2.0
The tech landscape has shifted dramatically since Donald Trump’s first term in office: AI is booming, Meta and Google are fighting antitrust battles, and Elon Musk turned Twitter into “X.” In anticipation of Trump 2.0, social media platforms like Facebook and Instagram have announced they’ll prioritize free speech over content moderation and fact-checking. So what’s in store for the tech industry in 2025? On GZERO World, The Atlantic CEO Nicholas Thompson joins Ian Bremmer to discuss recent shifts at Big Tech companies and the intersection of technology, media, and politics. What does the tech industry stand to gain—or lose—from another Trump presidency? Will Elon Musk have a positive impact on the future of US tech policy? And how will things like the proliferation of bots and the fragmentation of social media affect political discourse online?
“Social media platforms, in general, are shifting to the right, and they are less important than they were five years ago. They’re bifurcated, dispersed, conversations happen across platforms,” Thompson explains. “As communities split, there will be less and less one town square where people discuss issues of consequence.”
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
Big Tech and Trump 2.0: Nicholas Thompson on AI, Media, and Policy
Listen: What will the future of tech policy look like in a second Trump administration? And how will changes in the tech world—everything from the proliferation of AI and bots to the fragmentation of social media—impact how people talk, interact, and find information online? On the GZERO World Podcast, Nicholas Thompson, CEO of The Atlantic, joins Ian Bremmer to discuss the intersection of technology, media, and politics as Donald Trump prepares to return to the White House. Trump had a contentious relationship with the tech industry in his first term, but this time around, tech leaders are optimistic Trump 2.0 will be good for business, buoyed by hopes of loosening AI regulations, a crypto boom, and a more business-friendly administration. What does Big Tech stand to gain–or lose–from a second Trump presidency? Will Elon Musk help usher US tech policy into a new era, or will he create more chaos in the White House? And how concerned should we be about the dangers of AI-generated content online? Thompson and Bremmer break down the big changes in Big Tech and where the industry goes from here.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
What does Big Tech want from Trump?
What does Big Tech want from Donald Trump? Trump had a contentious relationship with the industry in his first administration. But in 2025, Silicon Valley is recalibrating. On Ian Explains, Ian Bremmer looks at the parade of tech leaders who have visited with Trump since his election win, including Amazon’s Jeff Bezos, Apple’s Tim Cook, and OpenAI’s Sam Altman, as well as moves like Meta’s recent announcement that it would scrap its fact-checking program, all aimed at getting on President-elect Trump’s good side as he prepares to return to office. So what does the industry stand to gain—or lose—from a second Trump term? Loosening AI and crypto regulation and a business-friendly White House are high on the wish list. However, blanket tariffs on China and Trump’s grudge against Section 230 could mean that, despite the optimism, Trump 2.0 may not lead to the big windfall Big Tech hopes for.
What’s up with Worldcoin?
Sam Altman wants to scan the eyeballs of every single person on Earth with an orb-shaped scanner and then pay them with cryptocurrency. This eye-raising proposition is called Worldcoin — also the name of the crypto coin in question — and seeks to solve a problem straight from science fiction: In the future, what if we can’t tell humans and robots apart?
Perhaps unsurprisingly, this strange initiative has received pushback from governments around the world concerned about the biometric privacy of their citizens. Its operations were shut down in Spain and Portugal in March and in Hong Kong in May. It was investigated by Kenyan authorities who later dropped the probe.
Worldcoin’s ability to operate in Europe will be determined in the coming weeks when the Bavarian data protection authority is set to rule on whether it’s compliant with GDPR, the European data privacy law.
The company says that about 6.5 million people worldwide have gotten scanned. That includes people in the US, where there are five locations where people can visit an orb and get their eyeball scanned: Atlanta, Los Angeles, New York, Palo Alto, and San Francisco. It has not been widely scrutinized by US regulators, but that could change if Europe takes a strong position on Altman’s side hustle.
What Sam Altman wants from Washington
Altman’s argument is not new, but his policy prescriptions are more detailed than before. In addition to the general undertone that Washington should trust the AI industry to regulate itself, the OpenAI chief calls for improved cybersecurity measures, investment in infrastructure, and new models for global AI governance. He wants additional security and funding for data centers, for instance, and says doing this will create jobs around the country. He also urges the use of additional export controls and foreign investment rules to keep the AI industry in US control, and outlines potentially global governance structures to oversee the development of AI.
We’ve heard Altman’s call for self-regulation and industry-friendly policies before — he has become something of a chief lobbyist for the AI industry over the past two years. His framing of AI development as a national security imperative echoes a familiar strategy used by emerging tech sectors to garner government support and funding.
Scott Bade, a senior geotechnology analyst at Eurasia Group, says Altman wants to “position the AI sector as a national champion. Every emerging tech sector is doing this: ‘We’re essential to the future of US power [and] competitiveness [and] innovation so therefore [the US government] should subsidize us.’”
Moreover, Altman’s op-ed has notable omissions. AI researcher Melanie Mitchell, a professor at the Santa Fe Institute, points out on X that there’s no mention of the negative effects on the climate, given that AI requires immense amounts of electricity. She also highlights a crucial irony in Altman’s insistence on safeguarding intellectual property: “He’s worrying about hackers stealing AI training data from AI companies like OpenAI, not about AI companies like OpenAI stealing training data from the people who created it!”
The timing of Altman’s op-ed is also intriguing. It comes as the US political landscape is shifting, with the upcoming presidential election no longer seen as a sure win for Republicans. The race between Kamala Harris and Donald Trump is now considered a toss-up, according to the latest polling since Harris entered the race a week and a half ago. This changing dynamic may explain why Altman is putting forward more concrete policy proposals now rather than counting on a more laissez-faire approach to come into power in January.
Harris is comfortable both taking on Silicon Valley and advocating for US AI policy on a global stage, as we wrote in last week’s edition. Altman will want to make sure his voice — perhaps the loudest industry voice — gets heard no matter who is elected in November.
Hard Numbers: Scarlett Johansson’s voice on ChatGPT, Sony Music’s warning, Energy drain, Stability AI’s instability, Sharing the love — and the GPUs
2: Film star Scarlett Johansson turned down OpenAI’s Sam Altman twice when he asked to use her voice for ChatGPT’s speech applications. Despite her refusals, OpenAI released a voice called “Sky” that sounds similar to Johansson’s. The actress (well, at least her voice) starred in the 2013 film “Her” — which Altman has called his favorite movie — portraying a disembodied AI that the protagonist becomes infatuated with. OpenAI says it hired another actress to voice “Sky,” but the company has now removed the voice “out of respect for Ms. Johansson.”
700: Sony Music sent letters to 700 AI developers and music streaming companies telling them it’s “opting out” of letting them use its content for training models. That includes musical compositions as well as lyrics, recordings, music videos, and album artwork. Last year, AI-generated songs featuring the fake voices of Drake and The Weeknd became a viral smash on social media — but music publishers aren’t in the habit of licensing their assets for free.
30: Microsoft reported that its carbon emissions jumped 30% between 2020 and 2023, a sign of the huge toll that artificial intelligence could take on the planet. Microsoft wants to be carbon negative by 2030, but its generative AI initiatives have hampered progress toward that goal.
1 billion: Amid a cash crunch, Stability AI is reportedly exploring a sale. The startup, which makes the Stable Diffusion image generator, was valued at $1 billion in 2022. The biggest question is: Who would buy it? The Biden administration has chilled the merger-and-acquisition market, taking an especially aggressive approach to litigating alleged antitrust violations throughout Silicon Valley.
Chuck Schumer’s light-touch plan for AI
Over the past year, Senate Majority Leader Chuck Schumer (D-NY) has led the so-called AI Gang, a group of senators eager to study the effects of artificial intelligence on society and curb the threats it poses through regulation. But calling this group a gang implies a certain level of toughness that was nowhere to be found in the roadmap it unveiled on May 15.
Announcing the 31-page roadmap, a bipartisan set of policy priorities for Congress, Schumer bragged of “months of discussion,” “hundreds of meetings,” and “nine first-of-their-kind AI Insight Forums,” including sessions with OpenAI’s Sam Altman and Meta’s Mark Zuckerberg.
What he delivered, however, was more of a spending plan than a vision for real regulation – the policy proposals were limited, and the approach was hands-off. The roadmap called for $32 billion in artificial intelligence-related spending on research and innovation over the next three years. It offered suggestions such as a federal data privacy law, legislation to curb deepfakes in elections, and a ban on “social scoring” like the social credit system that China has tested.
Civil society groups aren’t pleased
The long list of proposals is “no substitute for enforceable law – and these companies certainly know the difference, especially when the window to see anything into legislation is swiftly closing,” the AI Now Institute’s Amba Kak and Sarah Myers West wrote in a statement. Maya Wiley, CEO of the Leadership Conference on Civil and Human Rights, wrote that “the framework’s focus on promoting innovation and industry overshadows the real-world harms that could result from AI systems.”
Ronan Murphy of the Center for European Policy Analysis wrote that the gap between the US and EU approaches to AI could not be more stark. “US lawmakers believe it is premature to restrain fast-moving AI innovation,” he wrote. “In contrast, the EU’s AI Act bans facial recognition applications and tools that exhibit racial or other discrimination.”
Former White House technology advisor Suresh Venkatasubramanian tweeted that the proposal felt so unoriginal and recycled that it might have been written by ChatGPT.
An AI law is unlikely this year
Adam Conner, vice president of tech policy at the Center for American Progress, said that while the roadmap has some areas of substance, such as urging a federal data privacy law, “most sections are light on details.” He called the $32 billion spending proposal a “detailed wish list” for upcoming funding bills.
It was a thin result for something that took so long to cook up, he said, and “leaves little time on the calendar this year for substantive AI legislation, except for the funding bills Congress must pass this year and possibly the recently introduced bipartisan bicameral American Privacy Rights Act data privacy bill.” This means any other AI legislation will likely have to wait until next year. “Whether that was the plan all along is an open question,” Conner added.
Danny Hague, assistant director of Georgetown University’s Center for Security and Emerging Technology, agreed that it’s unlikely anything comprehensive gets passed this year. But he doesn’t necessarily see the report as a sign that the US will be hands-off with legislation. He said the Senate Working Group likely realizes that “time is limited,” and there are already “structures in place — regulatory agencies and the congressional committees that oversee them — to act on AI quickly.”
Jon Lieber, managing director for the United States for Eurasia Group, said he didn’t understand why an AI Gang was necessary at all. “I’m confused why Schumer felt the need to do something here,” he said. “This process should have been handled by a Senate committee, not the leader’s office.”
Such a soft line from Congress means that, until further notice, President Joe Biden — who has issued an executive order on AI, imposed export controls, directed CHIPS Act funding toward creating jobs and securing tech infrastructure, and told his agencies to get up to speed on AI — might just be the AI regulator in chief.