Science & Tech
Apple faces a federal class-action lawsuit alleging false advertising of AI features that haven’t yet materialized. Filed on Wednesday in the federal district court in San Jose, California, the suit claims Apple misled consumers by heavily promoting Apple Intelligence capabilities in iPhone marketing that weren’t yet fully functional, including an AI-enhanced Siri assistant. Bloomberg reported that when Apple began promoting its Apple Intelligence suite in the fall of 2024, the technology was merely a “barely working prototype.”
The legal challenge came the day before a significant executive shakeup at Apple. On Thursday, the company removed its digital assistant Siri from AI chief John Giannandrea’s purview and reassigned it to Mike Rockwell, creator of the Vision Pro mixed-reality headset. The restructuring also follows Apple’s announcement earlier this month that planned updates to Siri are delayed until 2026 due to development difficulties.
Meanwhile, Apple continues to develop future AI features, including an ongoing project to equip Apple Watches with cameras that could provide visual intelligence features for analyzing users’ surroundings. Ultimately, the company is betting on Rockwell’s technical expertise and its own hardware footprint to turn around its struggling AI efforts and catch up with competitors.
26 billion: CoreWeave, which is expected to start trading next Friday on the Nasdaq stock exchange, updated its prospectus on Thursday to disclose that it’s targeting up to a $26 billion valuation from its initial public offering. The Nvidia-backed company is a New Jersey-based cloud computing company that specializes in offering infrastructure to AI developers.
85: OpenAI and Meta are seeking partnerships with India’s Reliance Industries to expand their AI presence in the subcontinent, according to a report in The Information published Saturday. OpenAI, in particular, has floated the idea of distributing ChatGPT through Reliance’s wireless carrier, Jio, and even cutting subscription prices up to 85% for Indian customers.
10: Researchers have developed an AI weather prediction system called “Aardvark Weather,” which operates thousands of times more efficiently than conventional forecasting methods. This breakthrough from the University of Cambridge, Alan Turing Institute, Microsoft Research, and ECMWF can run on a desktop computer instead of supercomputers and uses just 10% of the input data that existing systems need. Aardvark is currently a research model and not yet available for public use.
50 million: Billionaire Reed Hastings, co-founder of Netflix, announced a donation to his alma mater, Bowdoin College, to the tune of $50 million on Monday. It’s a large gift for the small liberal arts college in Maine — the largest since its founding in 1794, according to the New York Times. Hastings said he wants Bowdoin to use the money to become a leader in studying the risks of AI and ethical questions associated with the technology.
Joachim von Braun, president of the Pontifical Academy of Sciences, speaks at the “Risks and Opportunities of AI for Children: A Common Commitment for Safeguarding Children” event.
In a conference at the Vatican last week, Catholic leaders called for global action to protect children from the dangers of artificial intelligence.
“We are really currently in a war at two frontiers when it comes to protecting children — the old ugly child exploitation, one-on-one, is not overcome — and now we have the new AI, gender-based violence at scale and sophistication,” Joachim von Braun, president of the Vatican’s Pontifical Academy of Sciences, told the press on Thursday.
The conference, which ran from Thursday to Saturday, brought together Catholic officials as well as tech experts, world leaders, and child protection advocates. Attendees discussed AI’s potential to detect online threats and expand education, but also its risks for abuse, such as deepfakes and algorithmic bias.
The Vatican under Pope Francis has taken a particular interest in AI: the pontiff appointed an AI advisor in 2024, and in January the Holy See warned of the technology’s “profound risks.”
A coalition of nine European countries is discussing how to accelerate the continent’s chip independence, the group said on Friday.
France, Germany, Italy, the Netherlands, and Spain are involved in the discussions, which are plotting a second Chips Act, according to Dutch Economy Minister Dirk Beljaarts. The first European Chips Act went into effect in 2023, though Reuters notes it has so far “failed to meet key goals” to stimulate the European chip market. On Wednesday, the European Semiconductor Industry Association and SEMI Europe, both industry trade groups, publicly called for a new Chips Act.
The new initiative could target specific gaps in Europe’s industrial capacity. Europe is strong in research and development and in semiconductor equipment (such as the Dutch lithography powerhouse ASML), but it needs to invest more in chip packaging and production, Beljaarts said. In September, Intel delayed plans to build a factory in Germany by at least two years. The coalition plans to present its proposals to the broader European community this summer.

The Trump White House has received thousands of recommendations for its upcoming AI Action Plan, a roadmap that will define how the US government will approach artificial intelligence for the remainder of the administration.
The plan was first mandated by President Donald Trump in his January executive order that scrapped the AI rules of his predecessor, Joe Biden. While Silicon Valley tech giants have put forth their plans for industry-friendly regulation and deregulation, many civil society groups have taken the opportunity to warn of the dangers of AI. Ahead of the March 15 deadline set by the White House to answer a request for information, Google and OpenAI were some of the biggest names to propose measures they’d like to see in place at the federal level.
What Silicon Valley wants
OpenAI urged the federal government to allow AI companies to train their models on copyrighted material without restriction, shield them from state-level regulations, and implement additional export controls against Chinese competitors.
“While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans,” OpenAI’s head of global policy, Christopher Lehane, wrote in a memo. Google meanwhile called for weakened copyright restrictions on training AI and “balanced” export controls that would protect national security without strangling American companies.
Xiaomeng Lu, the director of geo-technology at the Eurasia Group, said invoking Chinese AI models was a “competitive play” from OpenAI.
“OpenAI is threatened by DeepSeek and other open-source models that put pressure on the company to lower prices and innovate better,” she said. “Sam [Altman] likely wants the US government’s aid in wider access to data, export restrictions, and government procurement to boost its own market position.”
Laura Caroli, a senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies, agreed. “Despite DeepSeek’s problems in safety and privacy, the real point is … OpenAI feels threatened by DeepSeek’s ability to build powerful open-source models at lower costs,” she said. “They use the national security narrative to advance their commercial goals.”
Civil liberties and national security concerns
Civil liberties groups painted a more dire picture of what could happen if Trump pursues an AI strategy that does not attempt to place guardrails on the development of this technology.
“Automating important decisions about people is reckless and dangerous,” said Corynne McSherry, legal director at the Electronic Frontier Foundation. The group submitted its own response to the government on March 13. McSherry told GZERO it criticized tech companies for ignoring “serious and well-documented risks of using AI tools for consequential decisions about housing, employment, immigration, access to benefits” and more.
There are also important national security measures that might be ignored by the Trump administration if it removes all regulations governing AI.
“I agree that maintaining US leadership in AI is a national security imperative,” said Cole McFaul, research analyst at Georgetown University's Center for Security and Emerging Technology, which also submitted a response that focused on securing American leadership in AI while mitigating risks and better competing with China. “OpenAI’s RFI response includes a call to ban the use of PRC-trained models. I agree with a lot of what they proposed, but I worry that some of Washington’s most influential AI policy advocates are also those with the most to gain.”
But even with corporate influence in Washington, it’s a confusing time to try to navigate the AI landscape with so many nascent regulations in Europe, plus changing signals from the White House.
Mia Rendar, an attorney at the law firm Pillsbury Winthrop Shaw Pittman, noted that while the government is figuring out how to regulate this emerging technology, businesses are caught in the middle. “We’re at a similar inflection point that we were when GDPR was being put in place,” Rendar said, referring to the European privacy law. “If you’re a multinational company, AI laws are going to follow a similar model – you’ll need to set and maintain standards that meet the most stringent set of obligations.”
How influential is Silicon Valley?
With close allies like Tesla CEO Elon Musk and investor David Sacks in Trump’s orbit, the tech sector’s influence has been hard to ignore. Thus, the final AI Action Plan, expected in July, will show whether Silicon Valley really has pull with the Trump administration — and, specifically, which firms have what kind of sway.
While the administration has already signaled that it will be hands-off in regulating AI, it’s unclear what path Trump will take in helping American-made AI companies, sticking it to China, and signaling to the rest of the world that the United States is, in fact, the global leader on AI.
70: Open-source AI models performed just as well — or better — than proprietary models at solving complex medical problems, according to a new study by Harvard researchers published on Friday. Notably, Meta’s Llama model correctly diagnosed patients 70% of the time as opposed to OpenAI’s GPT-4, which did so only 64% of the time. This signals that the gap between open- and closed-source models, with the former being largely free to use and customizable, is closing.
7.5 billion: The surveillance company Flock Safety raised $275 million in a new funding round on Thursday. The round, led by Bedrock Capital, values Flock, which uses computer vision (a type of artificial intelligence), at $7.5 billion. The firm sells not only to private businesses but also to police departments, sparking concerns from civil liberties advocates such as the ACLU.
30: California’s state legislature is currently considering 30 bills about AI. The proposals are varied: one requires human drivers in autonomous vehicles while another mandates comprehensive safety testing. The state legislature could try to rein in the largely California-based technology sector just as the Trump administration rolls back Biden-era guardrails on AI.
31.8: South Korean chip sales to China dropped 31.8% in February, the second straight month of plummeting sales to the country, according to South Korea’s Ministry of Trade, Industry and Energy. This dropoff among Korean companies such as Samsung and SK Hynix could be attributable to new US export controls initiated at the end of the Biden administration, which restrict the sales of advanced semiconductors — even those using US parts — to China.
25: AI systems are very, very bad at reading clocks, according to new research from the University of Edinburgh, which tested models’ understanding of visual inputs. The AI systems tested correctly read analog clocks less than 25% of the time.
According to a new report, criminals are using AI to create scams in multiple languages, produce realistic impersonations to aid blackmail, and generate child sexual abuse material. Europol recently cracked down on the latter, coordinating the arrests of 24 people across 19 countries for violating national laws against deepfake child pornography.
“The very DNA of organized crime is changing rapidly, adapting to a world in flux,” wrote Europol executive director Catherine De Bolle. “These innovations [in AI] expand the speed, scale, and sophistication of organized crime, creating an even more complex and rapidly evolving threat landscape for law enforcement.” The report also warns that, in the future, autonomous AIs could even control criminal networks without human guidance.