Joachim von Braun, president of the Pontifical Academy of Sciences, speaks at the “Risks and Opportunities of AI for Children: A Common Commitment for Safeguarding Children” event.
The Vatican wants to protect children from AI dangers
At a conference at the Vatican last week, Catholic leaders called for global action to protect children from the dangers of artificial intelligence.
“We are really currently in a war at two frontiers when it comes to protecting children — the old ugly child exploitation, one-on-one, is not overcome — and now we have the new AI, gender-based violence at scale and sophistication,” Joachim von Braun, president of the Vatican’s Pontifical Academy of Sciences, told the press on Thursday.
The conference, which ran from Thursday to Saturday, brought together Catholic officials as well as tech experts, world leaders, and child protection advocates. Attendees discussed AI’s potential to detect online threats and expand education, but also its risks for abuse, such as deepfakes and algorithmic bias.
The Vatican under Pope Francis has taken a particular interest in AI: the pontiff appointed an AI advisor in 2024, and in January the Holy See warned of the technology’s “profound risks.”
Inside the fight to shape Trump’s AI policy
The Trump White House has received thousands of recommendations for its upcoming AI Action Plan, a roadmap that will define how the US government will approach artificial intelligence for the remainder of the administration.
The plan was first mandated by President Donald Trump in his January executive order that scrapped the AI rules of his predecessor, Joe Biden. While Silicon Valley tech giants have put forth their plans for industry-friendly regulation and deregulation, many civil society groups have taken the opportunity to warn of the dangers of AI. Ahead of the March 15 deadline set by the White House to answer a request for information, Google and OpenAI were some of the biggest names to propose measures they’d like to see in place at the federal level.
What Silicon Valley wants
OpenAI urged the federal government to allow AI companies to train their models on copyrighted material without restriction, shield them from state-level regulations, and implement additional export controls against Chinese competitors.
“While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans,” OpenAI’s head of global policy, Christopher Lehane, wrote in a memo. Google meanwhile called for weakened copyright restrictions on training AI and “balanced” export controls that would protect national security without strangling American companies.
Xiaomeng Lu, the director of geo-technology at the Eurasia Group, said invoking Chinese AI models was a “competitive play” from OpenAI.
“OpenAI is threatened by DeepSeek and other open-source models that put pressure on the company to lower prices and innovate better,” she said. “Sam [Altman] likely wants the US government’s aid in wider access to data, export restrictions, and government procurement to boost its own market position.”
Laura Caroli, a senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies, agreed. “Despite DeepSeek’s problems in safety and privacy, the real point is … OpenAI feels threatened by DeepSeek’s ability to build powerful open-source models at lower costs,” she said. “They use the national security narrative to advance their commercial goals.”
Civil liberties and national security concerns
Civil liberties groups painted a more dire picture of what could happen if Trump pursues an AI strategy that does not attempt to place guardrails on the development of this technology.
“Automating important decisions about people is reckless and dangerous,” said Corynne McSherry, legal director at the Electronic Frontier Foundation. The group submitted its own response to the government on March 13. McSherry told GZERO it criticized tech companies for ignoring “serious and well-documented risks of using AI tools for consequential decisions about housing, employment, immigration, access to benefits” and more.
There are also important national security measures that might be ignored by the Trump administration if it removes all regulations governing AI.
“I agree that maintaining US leadership in AI is a national security imperative,” said Cole McFaul, research analyst at Georgetown University's Center for Security and Emerging Technology, which also submitted a response that focused on securing American leadership in AI while mitigating risks and better competing with China. “OpenAI’s RFI response includes a call to ban the use of PRC-trained models. I agree with a lot of what they proposed, but I worry that some of Washington’s most influential AI policy advocates are also those with the most to gain.”
But even with corporate influence in Washington, it’s a confusing time to navigate the AI landscape, with nascent regulations taking shape in Europe and shifting signals coming from the White House.
Mia Rendar, an attorney at the law firm Pillsbury Winthrop Shaw Pittman, noted that while the government is figuring out how to regulate this emerging technology, businesses are caught in the middle. “We’re at a similar inflection point that we were when GDPR was being put in place,” Rendar said, referring to the European privacy law. “If you’re a multinational company, AI laws are going to follow a similar model – you’ll need to set and maintain standards that meet the most stringent set of obligations.”
How influential is Silicon Valley?
With close allies like Tesla CEO Elon Musk and investor David Sacks in Trump’s orbit, the tech sector’s influence has been hard to ignore. Thus, the final AI Action Plan, expected in July, will show whether Silicon Valley really has pull with the Trump administration — and, specifically, which firms have what kind of sway.
While the administration has already signaled that it will be hands-off in regulating AI, it’s unclear what path Trump will take in helping American-made AI companies, sticking it to China, and signaling to the rest of the world that the United States is, in fact, the global leader on AI.
Hard Numbers: Meet Dr. Llama, Surveillance capitalism, California’s dreaming of regulation, South Korea’s declining chip sales to China, Clocking out
70: Open-source AI models performed as well as — or better than — proprietary models at solving complex medical problems, according to a new study by Harvard researchers published on Friday. Notably, Meta’s Llama model correctly diagnosed patients 70% of the time, compared to OpenAI’s GPT-4, which did so only 64% of the time. This suggests that the gap between open-source models, which are largely free to use and customizable, and their closed-source rivals is closing. (For a rough sketch of how accuracy figures like these are computed, see the code after this list.)
7.5 billion: The surveillance company Flock Safety raised $275 million in a new funding round on Thursday. The round, led by Bedrock Capital, values Flock — which uses computer vision, a type of artificial intelligence — at $7.5 billion. The firm sells not only to private businesses but also to police departments, sparking concerns from civil liberties advocates such as the ACLU.
30: California’s state legislature is currently considering 30 bills about AI. The proposals are varied: one requires human drivers in autonomous vehicles while another mandates comprehensive safety testing. The state legislature could try to rein in the largely California-based technology sector just as the Trump administration rolls back Biden-era guardrails on AI.
31.8: South Korean chip sales to China dropped 31.8% in February, the second straight month of plummeting sales to the country, according to South Korea’s Ministry of Trade, Industry and Energy. The drop-off among Korean companies such as Samsung and SK Hynix could be attributable to new US export controls initiated at the end of the Biden administration, which restrict sales of advanced semiconductors — even those made with US parts — to China.
25: AI systems are very, very bad at reading clocks, according to new research from the University of Edinburgh, which tested models’ understanding of visual inputs. The AI systems tested correctly read analog clocks less than 25% of the time.
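Here is that sketch: a minimal, hypothetical harness for scoring models’ free-text answers against reference diagnoses. The model names, clinical cases, and canned answers are all invented stand-ins; this is not the Harvard study’s actual pipeline.

```python
# Hypothetical sketch of a diagnostic-accuracy harness. The models, cases,
# and answers are all mocked; this is not the Harvard study's pipeline.

cases = [
    ("55-year-old with crushing chest pain radiating to the left arm",
     "myocardial infarction"),
    ("30-year-old with fever, stiff neck, and photophobia",
     "meningitis"),
]

# Canned free-text answers standing in for real API calls to each model.
mock_answers = {
    "open-model":   ["Likely myocardial infarction.", "Consistent with meningitis."],
    "closed-model": ["Likely myocardial infarction.", "Possibly a tension headache."],
}

def accuracy(model: str) -> float:
    """Fraction of cases whose reference diagnosis appears in the model's answer."""
    hits = sum(
        reference in answer.lower()
        for (_vignette, reference), answer in zip(cases, mock_answers[model])
    )
    return hits / len(cases)

for model in mock_answers:
    print(f"{model}: {accuracy(model):.0%} of cases diagnosed correctly")
```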
Europe’s biggest companies want to “Buy European”
The coalition emphasized the need for technological sovereignty and independence across different layers of critical infrastructure, specifically highlighting artificial intelligence frameworks and models as key areas. It called for the creation of a “EuroStack,” a Europe-led “digital supply chain” proposed by economist Cristina Caffarra in January, to compete with US tech giants in Silicon Valley.
The group also recommended establishing a sovereign investment fund to pour money into quantum computing, chips, and cloud technology, as well as a “Buy European” effort across the continent. Europe was the first mover in AI regulation, passing its AI Act last year, but its companies have grown frustrated with the lack of commercial activity and autonomy. The companies stressed that without new investment, Europe will fall further behind the US and could be geopolitically vulnerable in the long term.
Beijing calls for labeling of generative AI
The rules, which were announced Friday and will go into effect on Sept. 1, mandate that any generative AI has to either explicitly signal that it was produced by AI — such as through a watermark — or it needs to encode that information in its metadata.
“The Labeling Law will help users identify disinformation and hold service suppliers responsible for labeling their content,” the Cyberspace Administration of China wrote in a statement, translated by Bloomberg. “This is to reduce the abuse of AI-generated content.”
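For a concrete sense of what the metadata route could look like in practice, here is a minimal sketch that writes a provenance label into a PNG file’s text chunks using the Pillow library. The field names are invented for illustration; the CAC’s actual technical specification is not reproduced here.

```python
# Hypothetical sketch: embedding an AI-provenance label in image metadata.
# The field names ("ai_generated", "generator") are invented examples, not
# China's actual labeling specification. Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256))  # stand-in for a generated image

metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # hypothetical field
metadata.add_text("generator", "example-model-v1")  # hypothetical field
image.save("labeled.png", pnginfo=metadata)

# Reading the label back out of the saved file:
print(Image.open("labeled.png").text)
# {'ai_generated': 'true', 'generator': 'example-model-v1'}
```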
It’s unclear how Chinese companies will comply. Critics of watermarking requirements in the US have warned that watermarks are easily removed or manipulated. The relationship between Beijing and China’s tech sector is always a push and pull — it’s unclear whether the government will be cheery about its thriving private actors for long, or institute additional rules like this one to rein them in and reassert dominance.
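The fragility critics point to is easy to demonstrate with the hypothetical label above: re-saving the file without explicitly carrying the metadata along silently discards it.

```python
# Continuing the hypothetical example: re-saving the PNG without passing
# its text chunks along drops the provenance label entirely.
from PIL import Image

relabeled = Image.open("labeled.png")
relabeled.save("stripped.png")  # no pnginfo argument, so text chunks are not copied

print(Image.open("stripped.png").text)  # {} : the label is gone
```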
Silicon Valley and Washington push back against Europe
On Feb. 11, US Vice President JD Vance told attendees at the AI Action Summit in Paris that Europe should pursue AI regulations that don’t “strangle” the industry.
Vance’s remarks came after Meta and Google publicly criticized Europe’s new code of practice for general-purpose AI models, part of the EU’s AI Act, earlier this month. Meta’s Joel Kaplan said the rules impose “unworkable and technically infeasible requirements” on developers, while Google’s Kent Walker called them a “step in the wrong direction.”
The criticism from Washington and Silicon Valley may be having an impact. The European Commission recently withdrew its planned AI Liability Directive, which was designed to make tech companies pay for harms caused by their AI systems. European Commission official Henna Virkkunen said the Commission is softening its rules not because of pressure from US officials, but to spur innovation and investment in Europe.
But these days, Washington and Silicon Valley are often speaking with the same voice.
France puts the AI in laissez-faire
France positioned itself as a global leader in artificial intelligence at last week’s AI Action Summit in Paris, but the gathering revealed a country more focused on attracting investment than on leading Europe’s approach to AI regulation.
The summit, which drew world leaders and technology executives on Feb. 10-11, showcased France’s shift away from Europe’s traditionally strict approach to tech regulation. French President Emmanuel Macron announced $113 billion in domestic AI investment while calling for simpler rules and faster development — a stark contrast to the EU’s landmark AI Act, which is gradually taking effect across the continent.
Esprit d’innovation
This pivot toward a business-friendly approach has been building since late 2023, when France tried unsuccessfully to water down provisions in the EU’s AI Act to help domestic firms like Mistral AI, the $6 billion Paris-based startup behind the chatbot Le Chat.
“France sees an opportunity to improve its sluggish economy via the development and promotion of domestic AI services and products,” said Mark Scott, senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “Where France does stand apart from others is its lip service to the need for some AI rules, but only in ways that, inevitably, support French companies to compete on the global stage.”
Nuclear power play
France does have unique advantages in AI: plentiful nuclear power, tons of foreign investment, and established research centers from Silicon Valley tech giants Alphabet and Meta. The country plans to dedicate up to 10 gigawatts of nuclear power to a domestic AI computing facility by 2030 and struck deals this month with both the United Arab Emirates and the Canadian energy company Brookfield.
About 70% of France’s electricity comes from nuclear — a clean energy source that’s become critical to the long-term vision of AI companies like Amazon, Google, and Microsoft.
France vs. the EU
But critics say France’s self-promotion undermines broader European efforts. “While the previous European Commission focused on oversight and regulation, the new cohort appears to follow an entirely different strategy,” said Mia Hoffman, a research fellow at Georgetown University’s Center for Security and Emerging Technology. She warned that EU leaders under the second Ursula von der Leyen-led Commission, which began in September 2024, are “buying into the regulation vs. innovation narrative that dominates technology policy debates in the US.”
The summit itself reflected these tensions. “It looked more like a self-promotion campaign by France to attract talent, infrastructure, and investments, rather than a high-level international summit,” said Jessica Galissaire of the French think tank Renaissance Numérique. She argued that AI leadership “should be an objective for the EU and not member states taken individually.”
This France-first approach marks a significant departure from a more united European tech policy, suggesting France may be more interested in competing with the US and China as a player on the world stage than in strengthening Europe’s collective position in AI development.
First US DeepSeek ban could be on the horizon
Lawmakers in the US House of Representatives want to ban DeepSeek’s AI models from federal devices.
Reps. Josh Gottheimer and Darin LaHood, a Democrat from New Jersey and a Republican from Illinois, respectively, introduced a bill on Thursday called the “No DeepSeek on Government Devices Act.” It would work similarly to the ban of TikTok on federal devices, which was signed into law by President Joe Biden in December 2022. Both bans apply to all government-owned electronics, including phones and computers.
DeepSeek’s R1 large language model is a powerful alternative to the top models from Anthropic, Google, Meta, and OpenAI — the first Chinese model to take the AI world by storm. But its privacy policy indicates that it can send user data to China Mobile, a Chinese state-owned telecom company that’s sanctioned in the US.
Since DeepSeek shot to fame in January, Australia and Taiwan have blocked access on government devices; Italy has banned it nationwide for citizens on privacy grounds. Congress may go further and try to ban DeepSeek in the United States, but so far no members have proposed doing that.