Capitol Hill, Washington, D.C.
Silicon Valley and Washington push back against Europe
That display came after Meta and Google earlier this month publicly criticized Europe’s new code of practice for general-purpose AI models, part of the EU’s AI Act. Meta’s Joel Kaplan said that the rules impose “unworkable and technically infeasible requirements” on developers, while Google’s Kent Walker called them a “step in the wrong direction.”
On Feb. 11, US Vice President JD Vance told attendees at the AI Action Summit in Paris, France, that Europe should pursue regulations that don’t “strangle” the AI industry.
The criticism from Washington and Silicon Valley may be having an impact. The European Commission recently withdrew its planned AI Liability Directive, which was designed to make tech companies pay for the harm caused by their AI systems. European official Henna Virkkunen said that the Commission is softening its rules not because of pressure from US officials, but rather to spur innovation and investment in Europe.
But these days, Washington and Silicon Valley are often speaking with the same voice.
France puts the AI in laissez-faire
France positioned itself as a global leader in artificial intelligence at last week’s AI Action Summit in Paris, but the gathering revealed a country more focused on attracting investment than leading Europe's approach to artificial intelligence regulation.
The summit, which drew heads of state and technology executives from around the world on Feb. 10-11, showcased France’s shift away from Europe’s traditionally strict tech regulation. French President Emmanuel Macron announced $113 billion in domestic AI investment while calling for simpler rules and faster development — a stark contrast to the EU’s landmark AI Act, which is gradually taking effect across the continent.
Esprit d’innovation
This pivot toward a business-friendly approach has been building since late 2023, when France tried unsuccessfully to water down provisions in the EU’s AI Act to help domestic firms like Mistral AI, the $6 billion Paris-based startup behind the chatbot Le Chat.
“France sees an opportunity to improve its sluggish economy via the development and promotion of domestic AI services and products,” said Mark Scott, senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “Where France does stand apart from others is its lip service to the need for some AI rules, but only in ways that, inevitably, support French companies to compete on the global stage.”
Nuclear power play
France does have unique advantages in AI: plentiful nuclear power, tons of foreign investment, and established research centers from Silicon Valley tech giants Alphabet and Meta. The country plans to dedicate up to 10 gigawatts of nuclear power to a domestic AI computing facility by 2030 and struck deals this month with both the United Arab Emirates and the Canadian energy company Brookfield.
About 70% of France’s electricity comes from nuclear — a clean energy source that’s become critical to the long-term vision of AI companies like Amazon, Google, and Microsoft.
France vs. the EU
But critics say France’s self-promotion undermines broader European efforts. “While the previous European Commission focused on oversight and regulation, the new cohort appears to follow an entirely different strategy,” said Mia Hoffman, a research fellow at Georgetown University’s Center for Security and Emerging Technology. She warned that EU leaders under the second Ursula von der Leyen-led Commission, which began in September 2024, are “buying into the regulation vs. innovation narrative that dominates technology policy debates in the US.”
The summit itself reflected these tensions. “It looked more like a self-promotion campaign by France to attract talent, infrastructure, and investments, rather than a high-level international summit,” said Jessica Galissaire of the French think tank Renaissance Numérique. She argued that AI leadership “should be an objective for the EU and not member states taken individually.”
This France-first approach marks a significant departure from a more united European tech policy, suggesting France may be more interested in competing with the US and China as a player on the world stage than in strengthening Europe’s collective position in AI development.
DeepSeek logo seen on a cell phone.
First US DeepSeek ban could be on the horizon
Lawmakers in the US House of Representatives want to ban DeepSeek’s AI models from federal devices.
Reps. Josh Gottheimer and Darin LaHood, a Democrat from New Jersey and a Republican from Illinois, respectively, introduced a bill on Thursday called the “No DeepSeek on Government Devices Act.” It would work similarly to the ban of TikTok on federal devices, which was signed into law by President Joe Biden in December 2022. Both bans apply to all government-owned electronics, including phones and computers.
DeepSeek’s R1 large language model is a powerful alternative to the top models from Anthropic, Google, Meta, and OpenAI — the first Chinese model to take the AI world by storm. But its privacy policy indicates that it can send user data to China Mobile, a Chinese state-owned telecom company that’s sanctioned in the US.
Since DeepSeek shot to fame in January, Australia and Taiwan have blocked access on government devices, and Italy has banned it nationwide on privacy grounds. Congress may go further and try to ban DeepSeek in the United States, but so far no members have proposed doing that.
US Vice President JD Vance delivers a speech during the plenary session of the Artificial Intelligence Action Summit at the Grand Palais in Paris, France, on Feb. 11, 2025.
JD Vance preaches innovation above all
Speaking at the AI Action Summit in Paris, France, US Vice President JD Vance on Tuesday laid out a vision of technological innovation above all — especially above regulation or international accords.
“I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity,” Vance said. “We believe that excessive regulation of the AI sector could kill a transformative industry.” The vice president told a group of heads of state that the regulations that the European Union has placed on tech, including the Digital Services Act and AI Act, have been onerous.
Additionally, the US and UK declined to sign a new international agreement put forward at the summit, which China, India, and France endorsed. The accord lays out norms for AI safety and sustainable energy use.
Europe has already achieved first-mover status in regulating artificial intelligence, a technology that is largely a Silicon Valley export. But the Trump administration has signaled that the gap between America’s hands-off approach to AI and Europe’s hands-on attempt to rein it in will only widen in the coming years.
President-elect Donald Trump points his finger at the Palm Beach County Convention Center on Nov. 6, 2024.
Trump wants a White House AI czar
If appointed, the AI czar would be the White House official tasked with coordinating the federal government’s use of the emerging technology and its policies toward it. And while the role will not go to Elon Musk, the billionaire tech CEO whom Trump has named to run a government efficiency commission, Musk will have input into who gets the job.
The Trump administration has promised a deregulatory attitude toward artificial intelligence, including undoing President Joe Biden’s 2023 executive order on AI.
That order tasked federal departments and agencies not only with evaluating how to regulate the technology under their statutory authority but also with exploring how to use it to further their own goals. Under Biden, each agency was required to name a chief AI officer. If Trump keeps those positions, the White House AI czar would likely coordinate with these officials across the executive branch.
Will Donald Trump let AI companies run wild?
Days are numbered for Biden’s executive order
Trump hasn’t given many details about how exactly he’ll rejigger the regulatory approach to AI, but he has promised to repeal President Joe Biden’s executive order on AI, which tasked every executive department and agency with developing common-sense rules to rein in AI while also exploring how they can use the technology to further their work. At a December 2023 campaign rally in Iowa, Trump promised to “cancel” the executive order and “ban the use of AI to censor the speech of American citizens on day one.” (It’s unclear what exactly Trump was referring to, but AI has long been used by social media companies for content moderation.)
The states will be in charge of regulating AI
Megan Shahi, director of technology policy at the Center for American Progress, a liberal think tank, said that a deregulatory approach by the Trump administration will cause a patchwork system that’ll be difficult for AI companies to comply with.
“This can be beneficial for some Americans living in states willing to pass regulation, but harmful for others without it,” she said. “The hope is that states set a national standard that AI companies seek to universally comply with, but that is unlikely to be a reality right away at least.”
While Trump himself is likely to be hands-off, she expects him to “entrust a team of his trusted allies” — such as Tesla and X CEO Elon Musk — “to do much of the agenda setting, decision making, and execution of the tech agenda.”
Will Trump reverse Biden’s chip crackdown?
Matt Mittelsteadt, a research fellow at the libertarian Mercatus Center at George Mason University, said he expects export controls on chips aimed at curbing China’s ability to compete on AI to continue. And while he thinks it’s a harmful idea, he believes a Republican unified government could enact controls on AI software — especially following reports that China used Meta’s open-source Llama models for military purposes.
The biggest change may come from Trump’s proposed tariffs on China. “For AI, the use of tariffs to either attempt to ‘punish China’ or reshore industry could be an industry killer,” Mittelsteadt said. “AI hardware depends on materials either not found or manufactured in the United States and no amount of trade protection will ‘reshore’ what cannot be reshored. The only possible result here will be financial strain that is bound to tighten the belts of Silicon Valley and yield a resulting decrease in research and development spend.”
This could give China a strategic advantage: “At this critical moment in the ‘AI race’ with China, such restrictions could represent a generational leapfrog opportunity for China’s tech sector.”
In the coming weeks, Trump will announce his Cabinet selections — the earliest indication of how he’ll handle AI and a litany of other crucial policy areas. Personnel is policy, after all. How quickly he can get them confirmed will impact how quickly he can unwind Biden’s orders and chart a new path, especially with a first 100 days agenda that’s likely to be jam-packed. Will AI make the cut or fall by the wayside? Trump hasn’t even been sworn in yet, but the clock is already ticking.
FILE PHOTO: California Governor Gavin Newsom (D) reacts as he speaks to the members of the press on the day of the first presidential debate hosted by CNN in Atlanta, Georgia, U.S., June 27, 2024.
Gov. Gavin Newsom vetoes California’s AI safety bill
California Gov. Gavin Newsom on Sunday vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, the AI safety bill passed by the state’s legislature in August.
Newsom has signed other AI-related bills into law, such as two recent measures protecting performers from AI deepfakes of their likenesses, but vetoed this one over concerns about the bill’s narrow focus.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote in a letter on Sept. 29. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
Democratic state Sen. Scott Wiener, who sponsored the bill, called the veto a “setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” Wiener hasn’t disclosed his next steps but vowed to keep pushing for AI regulation in the state: “California will continue to lead in that conversation — we are not going anywhere.”
Commerce Secretary Gina Raimondo arrives at a Senate Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies hearing on expanding broadband access on Capitol Hill in Washington, D.C., on Feb. 1, 2022.
National safety institutes — assemble!
The Biden administration announced that it will host a global safety summit on artificial intelligence on Nov. 20-21 in San Francisco. The International Network of AI Safety Institutes, which was formed at the AI Safety Summit in Seoul in May, will bring together experts from each member’s AI safety institute. The current members are Australia, Canada, the European Union, France, Japan, Kenya, Singapore, South Korea, the United Kingdom, and the United States.
The aim? “Strengthening international collaboration on AI safety is critical to harnessing AI technology to solve the world’s greatest challenges,” Secretary of State Antony Blinken said in a statement.
Commerce Secretary Gina Raimondo, co-hosting the event with Blinken, said that the US is committed to “pulling every lever” on AI regulation. “That includes close, thoughtful coordination with our allies and like-minded partners.”