Musk’s Department of Government Efficiency, aka DOGE, has sought massive cuts to the federal workforce, in particular targeting USAID, the Department of Education, and the Consumer Financial Protection Bureau, among other agencies.
But Musk isn’t just seizing control of the executive branch; he’s using artificial intelligence as his weapon of choice.
At the Education Department, DOGE representatives have reportedly fed sensitive data, including personally identifiable student loan information, into AI software through Microsoft’s Azure cloud service. A group of University of California students sued DOGE in federal court on Friday for allegedly violating federal privacy rules and exceeding its statutory authority. Additionally, congressional Democrats have demanded answers about allegations that a private server is being used at the Office of Personnel Management; federal workers have sued to stop it, while OPM officials deny that it violates the law. And a federal judge on Saturday temporarily halted DOGE’s access to taxpayer information at the Treasury Department because, the judge wrote, it risks disclosure of “sensitive and confidential information and the heightened risk that the systems in question will be more vulnerable than before to hacking.”
At the General Services Administration, a former Tesla engineer is pushing an “AI-first strategy” that involves building a custom chatbot called GSAi to help draft memos faster and adopting an AI coding agent such as the popular assistant Cursor to assist with software development.
Privacy and security advocates warn that the integration of AI software into the federal government could create significant risks — especially if not done carefully. “Using AI to cut spending or reform government operations is dangerous,” said Kit Walsh, the Electronic Frontier Foundation’s AI director. “AI isn’t magic; it is generated using data collected by humans and often categorized by humans. Then it provides a way to quickly (and often sloppily) try to reproduce the patterns and categories that have been given to it.”
Calli Schroeder, senior counsel at the Electronic Privacy Information Center, said there’s also the risk that AI gobbles up sensitive data and helps train its model on it. “Many AI systems use input data to expand their training datasets in addition to using it to generate a prompt response,” she said. “This not only means security risk if the raw training data is exposed, but also puts the data at risk for further misuse.”
Schroeder noted that these revelations raised fundamental questions about government security protocols if DOGE is indeed using unsecured systems. “Any halfway responsible business or organization has many security procedures and policies about what products you can and cannot connect with company devices,” she said. “It appears that our government either does not meet this incredibly basic level of responsibility and good practice, or no one is enforcing existing policies or procedures.”
The Education Department claims that there’s nothing to worry about with regard to DOGE staff overhauling the department’s systems. “They have been sworn in, have the necessary background checks and clearances, and are focused on making the Department more cost-efficient, effective, and accountable to the taxpayers,” a spokesperson said in a statement to the press. “There is nothing inappropriate or nefarious going on.”
But a lack of transparency has pervaded the entire Musk takeover, which has proceeded without comprehensive congressional oversight and with DOGE staffers at times refusing even to give their names while interrogating civil servants. Mere weeks into the administration, it’s wholly unclear what’s going on with major changes at multiple government departments and agencies, all seemingly with an element of AI. “We deserve lawful, transparent, and accountable decisions in government operations,” Walsh said. “It’s difficult to imagine that the technology at work here is fit for the purpose of making spending and personnel decisions — and Americans deserve better than to have to guess at how those decisions are being made.”
DeepSeek logo seen on a cell phone.
Lawmakers in the US House of Representatives want to ban DeepSeek’s AI models from federal devices.
Reps. Josh Gottheimer and Darin LaHood, a Democrat from New Jersey and a Republican from Illinois, respectively, introduced a bill on Thursday called the “No DeepSeek on Government Devices Act.” It would work similarly to the ban on TikTok on federal devices, which President Joe Biden signed into law in December 2022. Both measures apply to all government-owned electronics, including phones and computers.
DeepSeek’s R1 large language model is a powerful alternative to the top models from Anthropic, Google, Meta, and OpenAI — the first Chinese model to take the AI world by storm. But its privacy policy indicates that it can send user data to China Mobile, a Chinese state-owned telecom company that’s sanctioned in the US.
Since DeepSeek shot to fame in January, Australia and Taiwan have blocked access on government devices; Italy has banned it nationwide for citizens on privacy grounds. Congress may go further and try to ban DeepSeek in the United States, but so far no members have proposed doing that.
French President Emmanuel Macron delivers a speech during the plenary session of the Artificial Intelligence Action Summit at the Grand Palais in Paris, France, on Feb. 11, 2025.
France has real AI ambitions — and nuclear energy might be the key to unlocking them. At the AI Action Summit, which kicked off on Monday at the Grand Palais in Paris, the French government announced $113 billion in new artificial intelligence investments, to be powered by 1 gigawatt of dedicated nuclear power.
The initiative, spearheaded by the British data center company FluidStack, will begin construction in the third quarter of 2025. It seeks to achieve a similar scale to Stargate, the US government-backed project to expand the data center capacity of industry leader OpenAI.
The Wall Street Journal reports that France has 57 nuclear reactors at 18 separate plants, generating two-thirds of its national electricity supply from nuclear, a clean energy source. Additionally, France had surplus energy last year, which it exported.
People look at Linda Dounia Rebeiz's 14° 40′ 34.46″ N 17° 26′ 15.14″ W, which is displayed during a preview for a first-ever AI-dedicated art sale at Christie's Auctions in New York City, on Feb. 5, 2025.
The esteemed art auction house Christie’s will hold its first-ever show dedicated solely to AI-generated art later this month.
The event, called “Augmented Intelligence,” will kick off on Feb. 20 and conclude on March 5. It will feature work from the pioneering 1960s artist and programmer Harold Cohen, as well as contemporary artists including Pindar Van Arman and Holly Herndon. An autonomous robot made by Alexander Reben will paint live during the show in Christie’s galleries in New York’s Rockefeller Center.
But the intersection of art and AI is a source of tension in the art world as many believe that popular AI developers have improperly trained their image-generation models on copyrighted artworks. More than 3,000 artists signed a petition calling for Christie’s to cancel the show. “Many of the artworks you plan to auction were created using AI models that are known to be trained on copyrighted work without a license,” the petition states. “These models, and the companies behind them, exploit human artists, using their work without permission or payment to build commercial AI products that compete with them.”
Christie’s defended its sale in a statement to TechCrunch. “The artists represented in this sale all have strong, existing multidisciplinary art practices, some recognized in leading museum collections,” a Christie’s spokesperson said. “The works in this auction are using artificial intelligence to enhance their bodies of work and in most cases, AI is being employed in a controlled manner, with data trained on the artists’ own inputs.”
Elon Musk is leading a contingent of investors seeking to buy OpenAI, the developer of ChatGPT.
The group, which also includes the firms Valor Equity Partners, Baron Capital, Atreides Management, Vy Capital, and 8VC, reportedly offered $97.4 billion to buy OpenAI. The plan: To buy the biggest name in AI and merge it with Musk’s own AI firm, xAI, which makes the chatbot Grok.
This bid comes as Musk is taking a prominent role in the Trump administration, where he could help dictate the direction of AI investment in the country. Sam Altman, despite being a longtime Democratic donor, has also sought to get into Trump’s good graces, standing alongside the president last month to announce Stargate, a $500 billion AI infrastructure project.
Altman is also attempting to convert the nonprofit OpenAI into a for-profit company. As part of that effort, OpenAI is expected to soon close a historic funding round led by the Japanese investment house SoftBank, which could value the company at around $300 billion. Not only would that make OpenAI the most valuable privately held company in the world, but it would also make Musk and Co.’s offer a serious lowball. Even so, Musk’s bid could complicate OpenAI’s attempts to establish a fair value for an unconventionally structured corporate entity.
Altman responded to the offer on X, which Musk owns. “No thank you but we will buy twitter for $9.74 billion if you want,” he said. Musk shot back by calling Altman “Scam Altman.” Musk has also previously claimed that OpenAI does not have the funding it says it has secured for Stargate, a rare point of tension with Trump, who heralded the deal.
Silicon Valley is taking center stage in the Trump administration, but two of the loudest voices in Trump’s ear — at least on AI — are in an increasingly hostile spat.
Hard Numbers: Amazon’s spending blitz, Cal State gives everyone ChatGPT, a $50 AI model, France and UAE shake hands
The Amazon logo is being displayed on a smartphone in this photo illustration in Brussels, Belgium, on June 10, 2024.
500,000: More than half a million people will gain access to a specialized version of ChatGPT after OpenAI struck a deal with California State University, which has 460,000 students and 63,000 faculty members across 23 campuses. The customized chatbot will assist students and faculty with tutoring and study guides, and help staff with administrative tasks. The price of the deal is unclear.
50: Researchers at Stanford University and the University of Washington trained a large language model they say is capable of “reasoning” like the higher-end models from OpenAI and Anthropic. The catch? They did it while spending only $50 in compute credits. The new model, called s1, is “distilled” from a Google model called Gemini 2.0 Flash Thinking Experimental, a process in which a smaller model is fine-tuned to reproduce the outputs of a larger one (a rough sketch of the idea appears after this list).
1: France and the United Arab Emirates struck a deal to develop a 1 gigawatt AI data center on Thursday, ahead of the Artificial Intelligence Action Summit in Paris. It’s unclear where the data center will be located, but the agreement means that it will serve both French and Emirati AI efforts.
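For readers curious what “distillation” looks like in practice, here is a minimal, illustrative PyTorch sketch of the general technique: a small student model is trained to match the output distribution of a larger, frozen teacher. This is not the s1 team’s actual pipeline, which reportedly fine-tuned a model on reasoning traces sampled from the Gemini teacher; the model sizes, data, and hyperparameters below are toy placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, SEQ, BATCH = 1000, 64, 16, 4

# Toy stand-ins: a "large" frozen teacher and a smaller trainable student.
teacher = nn.Sequential(nn.Embedding(VOCAB, HIDDEN), nn.Linear(HIDDEN, VOCAB)).eval()
student = nn.Sequential(nn.Embedding(VOCAB, HIDDEN // 2), nn.Linear(HIDDEN // 2, VOCAB))
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

tokens = torch.randint(0, VOCAB, (BATCH, SEQ))  # placeholder for real prompt tokens

with torch.no_grad():
    teacher_logits = teacher(tokens)  # the larger model's predictions ("soft labels")

student_logits = student(tokens)

# Distillation loss: pull the student's next-token distribution toward the teacher's.
T = 2.0  # softening temperature
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * T * T

loss.backward()
optimizer.step()
print(f"distillation loss: {loss.item():.4f}")

Because the expensive reasoning ability already lives in the teacher, the student only pays for imitation, which is why this kind of fine-tuning can be done on a very small compute budget.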
US Vice President JD Vance delivers a speech during the plenary session of the Artificial Intelligence Action Summit at the Grand Palais in Paris, France, on Feb. 11, 2025.
Speaking at the AI Action Summit in Paris, France, US Vice President JD Vance on Tuesday laid out a vision of technological innovation above all — especially above regulation or international accords.
“I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity,” Vance said. “We believe that excessive regulation of the AI sector could kill a transformative industry.” The vice president told a group of heads of state that the regulations that the European Union has placed on tech, including the Digital Services Act and AI Act, have been onerous.
Additionally, the US and UK did not sign onto a new international agreement put forward at the summit — which China, India, and France agreed to. The accord lays out norms for AI safety and sustainable energy use.
Europe has already achieved first-mover status in regulating artificial intelligence software, which remains largely a Silicon Valley export. But the Trump administration has signaled that the gap between America’s hands-off approach to AI and Europe’s hands-on attempt to rein it in will only widen in the coming years.