Inside Elon Musk and DOGE's "revolutionary" push to reshape Washington, with WIRED's Katie Drummond
Listen: Elon Musk, the world’s richest man, made his fortune breaking industries—space, cars, social media—and is now trying to break the government… in the name of fixing it. But what happens when Silicon Valley’s ‘move fast and break things’ ethos collides with the machinery of federal bureaucracy? On the GZERO World Podcast, Ian Bremmer sits down with WIRED Global Editorial Director Katie Drummond to unpack the implications of Musk’s deepening role in the Trump administration and what’s really behind his push into politics. In a few short weeks, Musk’s Department of Government Efficiency has dramatically reshaped the government, slashing budgets, eliminating thousands of jobs, and centralizing vast amounts of government data, all in the name of efficiency. Is this a necessary shake-up or a dangerous consolidation of power? Drummond and Bremmer dig into the political motives behind DOGE, President Trump’s close relationship with Musk, and how the tech billionaire’s far-right leanings could shape the future of US policy. Can Elon's vision of innovation bring efficiency to Washington, or will it just inject more chaos into the system?
Inside the fight to shape Trump’s AI policy
The Trump White House has received thousands of recommendations for its upcoming AI Action Plan, a roadmap that will define how the US government will approach artificial intelligence for the remainder of the administration.
The plan was first mandated by President Donald Trump in his January executive order that scrapped the AI rules of his predecessor, Joe Biden. While Silicon Valley tech giants have put forth their plans for industry-friendly regulation and deregulation, many civil society groups have taken the opportunity to warn of the dangers of AI. Ahead of the March 15 deadline set by the White House to answer a request for information, Google and OpenAI were some of the biggest names to propose measures they’d like to see in place at the federal level.
What Silicon Valley wants
OpenAI urged the federal government to allow AI companies to train their models on copyrighted material without restriction, shield them from state-level regulations, and implement additional export controls against Chinese competitors.
“While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans,” OpenAI’s head of global policy, Christopher Lehane, wrote in a memo. Google meanwhile called for weakened copyright restrictions on training AI and “balanced” export controls that would protect national security without strangling American companies.
Xiaomeng Lu, the director of geo-technology at the Eurasia Group, said invoking Chinese AI models was a “competitive play” from OpenAI.
“OpenAI is threatened by DeepSeek and other open-source models that put pressure on the company to lower prices and innovate better,” she said. “Sam [Altman] likely wants the US government’s aid in wider access to data, export restrictions, and government procurement to boost its own market position.”
Laura Caroli, a senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies, agreed. “Despite DeepSeek’s problems in safety and privacy, the real point is … OpenAI feels threatened by DeepSeek’s ability to build powerful open-source models at lower costs,” she said. “They use the national security narrative to advance their commercial goals.”
Civil liberties and national security concerns
Civil liberties groups painted a more dire picture of what could happen if Trump pursues an AI strategy that does not attempt to place guardrails on the development of this technology.
“Automating important decisions about people is reckless and dangerous,” said Corynne McSherry, legal director at the Electronic Frontier Foundation. The group submitted its own response to the government on March 13. McSherry told GZERO it criticized tech companies for ignoring “serious and well-documented risks of using AI tools for consequential decisions about housing, employment, immigration, access to benefits” and more.
There are also important national security measures that might be ignored by the Trump administration if it removes all regulations governing AI.
“I agree that maintaining US leadership in AI is a national security imperative,” said Cole McFaul, research analyst at Georgetown University's Center for Security and Emerging Technology, which also submitted a response that focused on securing American leadership in AI while mitigating risks and better competing with China. “OpenAI’s RFI response includes a call to ban the use of PRC-trained models. I agree with a lot of what they proposed, but I worry that some of Washington’s most influential AI policy advocates are also those with the most to gain.”
But even with corporate influence in Washington, it’s a confusing time to try to navigate the AI landscape with so many nascent regulations in Europe, plus changing signals from the White House.
Mia Rendar, an attorney at the law firm Pillsbury Winthrop Shaw Pittman, noted that while the government is figuring out how to regulate this emerging technology, businesses are caught in the middle. “We’re at a similar inflection point that we were when GDPR was being put in place,” Rendar said, referring to the European privacy law. “If you’re a multinational company, AI laws are going to follow a similar model – you’ll need to set and maintain standards that meet the most stringent set of obligations.”
How influential is Silicon Valley?
With close allies like Tesla CEO Elon Musk and investor David Sacks in Trump’s orbit, the tech sector’s influence has been hard to ignore. Thus, the final AI Action Plan, expected in July, will show whether Silicon Valley really has pull with the Trump administration — and, specifically, which firms have what kind of sway.
While the administration has already signaled that it will be hands-off in regulating AI, it’s unclear what path Trump will take in helping American-made AI companies, sticking it to China, and signaling to the rest of the world that the United States is, in fact, the global leader on AI.
Silicon Valley and Washington push back against Europe
On Feb. 11, US Vice President JD Vance told attendees at the AI Action Summit in Paris, France, that Europe should pursue regulations that don’t “strangle” the AI industry.
That display came after Meta and Google publicly criticized Europe’s new code of practice for general AI models, part of the EU’s AI Act, earlier this month. Meta’s Joel Kaplan said that the rules impose “unworkable and technically infeasible requirements” on developers, while Google’s Kent Walker called them a “step in the wrong direction.”
The overseas criticism from Washington and Silicon Valley may be having an impact. The European Commission recently withdrew its planned AI Liability Directive, designed to make tech companies pay for the harm caused by their AI systems. European official Henna Virkkunen said that the Commission is softening its rules not because of pressure from US officials, but rather to spur innovation and investment in Europe.
But these days, Washington and Silicon Valley are often speaking with the same voice.
David Sacks, former CEO of Zenefits, is seen here speaking at a 2016 TechCrunch Disrupt in San Francisco, California.
Meet David Sacks, the new White House AI czar
Sacks will not be full-time in the role and will stay at his fund, Craft Ventures, but he will assume a lofty portfolio that covers two of the hottest topics in tech policy: artificial intelligence and crypto. President Joe Biden spent the second half of his term getting his departments and agencies to develop rulemaking on AI — and figuring out how to adopt the technology for their own purposes.
Meanwhile, Gary Gensler, Biden’s Securities and Exchange Commission chair, has ramped up enforcement against fraud in the crypto industry — though he’s stopped short of any sweeping shutdown of the coins. Trump’s election, his promises of deregulation, and his personal interest in crypto have led to the skyrocketing price of many cryptocurrencies, including bitcoin.
Trump will take a deregulatory approach to both artificial intelligence and crypto, but it’s up to Sacks to coordinate across a sprawling bureaucracy and determine how to execute that goal. As a Silicon Valley stalwart, and one who has not severed business ties to join the government, Sacks will soon be the face of Trump’s tech policy, likely taking a heavy hand to the rules and regulations begun under Biden.
California wants to prevent an AI “catastrophe”
The Golden State may be close to passing AI safety regulation — and Silicon Valley isn’t pleased.
The proposed AI safety bill, SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aims to establish “common sense safety standards” for powerful AI models.
The bill would require companies developing high-powered AI models to implement safety measures, conduct rigorous testing, and provide assurances against "critical harms," such as the use of models to execute mass-casualty events or cyberattacks that cause at least $500 million in damages. The California attorney general could take civil action against violators, though the rules would apply only to models that cost at least $100 million to train and exceed a certain computing threshold.
A group of prominent academics, including AI pioneers Geoffrey Hinton and Yoshua Bengio, published a letter last week to California’s political leaders supporting the bill. “There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,” they wrote, saying that regulations are necessary not only to rein in the potential harms of AI but also to restore public confidence in the emerging technology.
Critics, including many in Silicon Valley, argue the bill is overly vague and could stifle innovation. In June, the influential startup incubator Y Combinator wrote a public letter outlining its concerns. It argued that liability should lie with those who abuse AI tools, not with developers; that the threshold for inclusion under the law is arbitrary; and that a requirement that developers include a “kill switch” allowing them to turn off the model would be a “de facto ban on open-source AI development.”
Steven Tiell, a nonresident senior fellow with the Atlantic Council's GeoTech Center, thinks the bill is “a good start” but points to “some pitfalls.” He appreciates that it only applies to the largest models but has concerns about the bill’s approach to “full shutdown” capabilities – aka the kill switch.
“The way SB 1047 talks about the ability for a ‘full shutdown’ of a model – and derivative models – seems to assume foundation models would have some ability to control derivative models,” Tiell says. He warned this could “materially impact the commercial viability of foundation models across wide swaths of the industry.”
Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation, acknowledges the tech industry’s concerns. “AI is changing rapidly, so it’s hard to know whether — even with the flexibility in the bill — the regulation it’s proposing will age well with the industry,” she says.
“The whole idea of open-source is that you’re making a tool for people to use as they see fit,” she says, emphasizing the burden on open-source developers. “And it’s both harder to make that assurance and also less likely that you’ll be able to deal with penalties in the bill because open-source projects are often less funded and less able to spend money on compliance.”
State Sen. Scott Wiener, the bill’s sponsor, told Bloomberg he’s heard industry criticisms and made adjustments to its language to clarify that open-source developers aren’t entirely liable for all the ways their models are adapted, but he stood by the bill’s intentions. “I’m a strong supporter of AI. I’m a strong supporter of open source. I’m not looking in any way to impede that innovation,” Wiener said. “But I think it’s important, as these developments happen, for people to be mindful of safety.” Spokespeople for Wiener did not respond to GZERO’s request for comment.
In the past few months, Utah and Colorado have passed their own AI laws, but both focus on consumer protection rather than liability for catastrophic results of the technology. California, which houses many of the biggest companies in AI, has broader ambitions. But while California has been able to lead the nation, and even the federal government, on data privacy, it might need industry support to get its AI bill fully approved in the legislature and signed into law. California’s Senate passed the bill last month, and the Assembly is set to vote on it before the end of August.
California Gov. Gavin Newsom hasn’t signaled whether or not he’ll sign the bill should it pass both houses of the legislature, but in May, he publicly warned against over-regulating AI and ceding America’s advantage to rival nations: “If we over-regulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
You say you want AI revolution?
A year after the launch of ChatGPT, who are the winners and losers, and what's next? Our new columnist Azeem Azhar, founder of Exponential View, and an author and analyst, weighs in.
It’s hard to believe it’s been less than a year since ChatGPT was unveiled by Sam Altman, the boss of OpenAI. Far from the razzmatazz that normally accompanies Silicon Valley launches, Altman posted an innocuous tweet. And the initial responses could be characterized as bemused delight at seeing a new trinket.
But looking back, we can see that ChatGPT was about to unleash a tidal wave of chaos, not merely on the technology industry but on the world at large. That chaos has seen the world’s largest technology firms swing their supertankers around.
The industry thrives off having a new technology platform: Crypto is a bust, and the metaverse is still a pipe dream. But today’s AI — the large language models that operate as the brains in ChatGPT — seems like the real deal.
Precarious presumptions
Many of the Big Tech firms, like Alphabet, which initially developed the transformer technologies that underpin large language models — along with Amazon, Meta, and Apple — underestimated the impact ChatGPT would have. They have since feverishly chased the generative AI train: Alphabet reorganized all its AI talent under Demis Hassabis and rushed out new products, such as Bard; Meta publicly released an impressive range of open-source AI models; Amazon invested $4 billion in OpenAI's competitor, Anthropic; and Apple is readying its own generative tools. Microsoft, meanwhile, had its ducks in a row. The company had built an important strategic deal with OpenAI, brokered by Reid Hoffman, a much-respected Silicon Valley investor who at the time sat on the boards of both firms.
For years, the received wisdom about artificial intelligence was that it would automate many types of white-collar tasks, starting with routine desk work. The research and market forecasts suggested that those of us doing nonroutine cognitive work — lawyers, strategy consultants, policy wonks, readers like you — perform tasks that are too complex for early AI systems. Rather it would be methodical desk work, such as data entry, document review, and customer service, that would be the easiest to automate.
A very different reality
A new study from Harvard Business School and Boston Consulting Group, the white-shoe consultancy, ixnayed that assumption. The researchers tested nearly 800 consultants, likely graduates of the world’s most selective schools, on typical strategy consulting tasks. Half the group had help from ChatGPT, and the other half worked on their own. The results were stunning. On average, the consultants using ChatGPT completed their work 25.1% faster. And the bottom half of consultants saw the quality of their output increase by 43% — taking their average performance to well above that of the unaided consultants.
This result — matched by other research — throws received wisdom out the window. Even nonroutine work can benefit from AI. And we're not talking about highly advanced AI but rather garden-variety AI people can access on their phones. As a result, employees will be enticed by ChatGPT’s productivity gains into ignoring corporate security policies. The personal win — better-quality work, more free time — will be too great for workers to pass up. Employers will struggle to rein in this behavior, which will expose their firms to new potential liabilities.
The road to standardization
At the same time, powerful general technologies do not necessarily work in favor of the employee, as bosses are tempted to substitute capital (machines) for labor (people). Historically, general-purpose technologies have become the sites of political contestation: Think of workers protesting power looms and assembly lines. The dispute is not about the technologies themselves but rather how the gains from the technology are split. It is a fight over power.
The recent screenwriters' strike in Hollywood is just such a battle. In a sense, it is less about the technology and more about the terms on which it is introduced. Similar fights will erupt in different industries and countries in the coming years until new norms emerge. Several artists and writers have filed lawsuits against OpenAI for training its systems on their creative work.
During the Industrial Revolution, the process of normalizing standards took several decades in 18th- and 19th-century England. The workers’ plight worsened as the gains from automation went to shareholders, giving rise to the heart-rending stories Charles Dickens told. It was likely the success of labor movements that helped wages catch up.
And the tension with workers will be only one fault line. Governments are critical to the process of developing standards and norms, and yet their record of dealing with the impact of technologies in recent decades has been poor. Once the internet went mainstream in the late 1990s, catalyzed by the Clinton-Gore administration, successive American and European governments did little to advance the institutional or regulatory reform this expanded industry needed.
After 9/11, the US government became overly enamored with the surveillance capabilities afforded by the internet and the soft power big American tech firms offered. Washington did little to address the anti-competitive and politically polarizing side effects that allowed tech to morph into Big Tech.
Even late last year, governments were moving en masse, but slowly, to confront these questions. ChatGPT woke everyone up. Whether in China, the US, the EU, or the UK, figuring out what the institutional guardrails around AI should be has become a belated priority. In the UK, Rishi Sunak is making a late play for global leadership by hosting, this week, an AI Safety Summit with a view toward building a scientifically robust international agency, like the IPCC, to help evaluate, identify, and manage the most worrisome risks posed by AI.
The UN’s Antonio Guterres has announced his own AI advisory body, which may help the Global South develop a voice in how we contend with the beneficial deployment of AI.
Even perfectly designed, which nothing can be, a general-purpose technology will force changes to the rules and behaviors in a society. As I write in my book, the accelerating pace of change means we have a smaller window than normal to turn this chaos into some semblance of order. And that order will require effective national and multilateral governance and institutions that support them. No one quite knows, nor will we know for a while, what “effective” means in this context. Acting too quickly raises more risks: rash regulation, a paucity of deliberation, and, most likely, the exclusion of groups lacking the resources to mount effective lobbying.
If the first year of ChatGPT’s launch was marked by chaos, I doubt, given the accelerating pace of technology, the next year will have less turmoil. But it may, at least, be accompanied by a wider consensus endeavoring to erect some scaffolding from which effective governance, leading to more equitable prosperity, might emerge in the coming years.
Canadian Heritage Minister Pascale St-Onge
Meta faces Canadian watchdog probe
Meta, which owns Facebook and Instagram, started blocking news articles for Canadians in response to a federal law that would force it to share revenue with news outlets. The law won’t take effect for six months, but Meta has reacted aggressively. Rather than comply or consult with the government about next steps, it has moved to just block news links, arguing that people don’t come to the site for that purpose. The federal government has denounced the move.
"We’re going to keep standing our ground,” said Canadian Heritage Minister Pascale St-Onge. “After all, if the government can’t stand up for Canadians against tech giants, who will?”
Both Meta and Alphabet, which owns Google, have said they will block news rather than pay for it, although Google has not responded as aggressively.
Observers think both companies have reason to be uneasy about setting a precedent that governments in other jurisdictions might emulate. But so far, the tech giants appear to be winning the game of chicken. In June, a California link tax bill stalled in the legislature and a similar bill seems to be stuck in Congress.
Canadian media outlets have seen their traffic from Facebook cut — some have seen their page views halved — which has a knock-on effect on potential advertising revenue. But they should not hold their breath hoping to be rescued by the Competition Bureau, according to an analysis by University of Ottawa professor Michael Geist, an expert in e-commerce law, who described the industry’s complaint as “exceptionally weak.”
Tech talent wars & the role of ethics in Big Tech success (long-term)
Facebook whistleblower Frances Haugen still has hope that the corporate culture inside tech companies can change for the better.
"Huge things that seemed impossible [...] all came to be," she says, comparing the idea to historical tectonic shifts like the end of the Cold War or apartheid in South Africa.
Speaking to Ian Bremmer on GZERO World, Haugen says that she doesn't want to tear down social media companies. In fact, she wants them to be successful in the long run "because culture change will come along with that."
Google recently had to ditch a lucrative Pentagon contract in order to retain the best talent.
Facebook, on the other hand, is at a huge disadvantage: it struggles to recruit top talent because of its negative image in the industry.