The Feds vs. California: Inside the twin efforts to regulate AI in the US


Silicon Valley is home to the world’s most influential artificial intelligence companies. But the Golden State and Washington, DC, are currently split over how to regulate this emerging technology.

The federal approach is relatively hands-off. After Joe Biden’s administration persuaded leading AI companies to sign a voluntary pledge in July 2023 to mitigate risks posed by AI, it issued a sweeping executive order on artificial intelligence in October 2023. That order directed federal agencies and departments to begin writing rules and to explore how they can incorporate AI to improve their work. The administration also signed on to the UK’s Bletchley Declaration, a multi-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In April, the White House clarified that under the executive order, agencies have until December to “assess, test, and monitor” the impact of AI on their work, mitigate algorithmic discrimination, and provide transparency into how they’re using AI.

But perhaps the administration’s biggest win came on Aug. 29, when OpenAI and Anthropic voluntarily agreed to share their new models with the government so officials can safety-test them before they’re released to the public. The models will be shared with the US AI Safety Institute, housed under the Commerce Department’s National Institute of Standards and Technology, or NIST.

“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote on X. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”

Altman’s insistence that regulation should happen at the national level also read as a rebuke of California’s effort to regulate the company and its technology.

Brian Albrecht, the chief economist at the International Center for Law & Economics, was not surprised by the companies’ willingness to share their models with the government. “This is a very standard response to expected regulation,” Albrecht said. “And it’s always tough to know how voluntary any of this is.”

But Dean Ball, a research fellow at the libertarian think tank Mercatus Center, said he’s concerned about the opacity of these arrangements. “We do not know what level of access the federal government is being given, whether the federal government has the ability to request that model releases be delayed, and many other specific details,” Ball said. “This is not the way lawmaking is supposed to work in America; having private arrangements worked out between providers of transformative technology and the federal government is a troubling step in AI policy.”

Still, these appear to be relatively light-touch measures, a contrast with California’s proposed approach to regulating artificial intelligence.

On Aug. 28, the state’s legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, which aims to establish “common sense safety standards” for powerful AI models. Written by California state Sen. Scott Wiener and supported by AI pioneers like Geoffrey Hinton and Yoshua Bengio, the bill has divided Silicon Valley companies. Albrecht said that what California has proposed is much closer to the European model of AI regulation — the EU’s AI Act, which passed in March — while Washington hasn’t yet adopted a unified view on how the technology should be regulated.

Critics of the bill include OpenAI, California’s Chamber of Commerce, and even former Speaker of the House Nancy Pelosi. “While we want California to lead in AI in a way that protects consumers, data, intellectual property, and more, SB 1047 is more harmful than helpful in that pursuit,” Pelosi said in a recent statement. In a recent edition of GZERO AI, experts from the Electronic Frontier Foundation and the Atlantic Council expressed concerns about the bill’s so-called “kill switch” and how it could stifle open-source AI development.

Some industry players have been more open to the bill. Anthropic said the bill’s benefits likely outweigh its risks, and Tesla CEO Elon Musk, who has an AI startup of his own called xAI, said California should “probably” pass the bill.

It’s still unclear whether Gov. Gavin Newsom will sign the bill — he has until Sept. 30 to do so. He has not signaled his view on the legislation, but in May, he warned about the risk of overregulating AI.

“I don’t want to cede this space to other states or other countries,” Newsom said at an event in San Francisco. “If we overregulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
