
Will Donald Trump let AI companies run wild?

Artificial intelligence was not a primary focus of the US presidential campaign for either Donald Trump or Kamala Harris, and AI-generated disinformation did not disrupt election proceedings as many experts had feared. Still, with Republicans set for a clean sweep of the White House and both chambers of Congress, the election results have major implications for the future of AI. Simply put, Republican control of government suggests that, at least for the next two years before the 2026 midterm elections, AI companies may be able to run wild without fear of significant regulatory intervention.

California Gov. Gavin Newsom speaks to the press on the day of the first presidential debate, hosted by CNN in Atlanta, Georgia, on June 27, 2024. REUTERS/Marco Bello/File Photo

Gov. Gavin Newsom vetoes California’s AI safety bill

California Gov. Gavin Newsom on Sunday vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, the AI safety bill passed by the state’s legislature in August.

Newsom has signed other AI-related bills into law, such as two recent measures protecting performers from AI deepfakes of their likenesses, but he vetoed this one over concerns about the bill’s focus.


Commerce Secretary Gina Raimondo arrives at a Senate Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies hearing on expanding broadband access on Capitol Hill in Washington, DC, on Feb. 1, 2022. Sarah Silbiger/Pool via REUTERS

National safety institutes — assemble!

The Biden administration announced that it will host a global safety summit on artificial intelligence on Nov. 20-21 in San Francisco. The International Network of AI Safety Institutes, which was formed at the AI Safety Summit in Seoul in May, will bring together safety experts from each member country’s AI safety institute. The current member countries are Australia, Canada, the European Union, France, Japan, Kenya, Singapore, South Korea, the United Kingdom, and the United States.

The aim? “Strengthening international collaboration on AI safety is critical to harnessing AI technology to solve the world’s greatest challenges,” Secretary of State Antony Blinken said in a statement.

Commerce Secretary Gina Raimondo, co-hosting the event with Blinken, said that the US is committed to “pulling every lever” on AI regulation. “That includes close, thoughtful coordination with our allies and like-minded partners.”

What do Democrats want for AI?

At last week’s Democratic National Convention, the Democratic Party and its newly minted presidential candidate, Vice President Kamala Harris, made little reference to technology policy or artificial intelligence. But the party’s platform and a few key mentions at the DNC suggest how a Harris administration would handle AI.

In the official party platform, artificial intelligence is mentioned three times. The platform says Democrats will support historic federal investments in research and development, break “new frontiers of science,” and create jobs in artificial intelligence, among other sectors. It also pledges investment in “technology and forces that meet the threats of the future,” including artificial intelligence and unmanned systems.


California wants to prevent an AI “catastrophe”

The Golden State may be close to passing AI safety regulation — and Silicon Valley isn’t pleased.

The proposed AI safety bill, SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aims to establish “common sense safety standards” for powerful AI models.

The bill would require companies developing high-powered AI models to implement safety measures, conduct rigorous testing, and provide assurances against “critical harms,” such as the use of models to carry out mass-casualty events or cyberattacks causing at least $500 million in damages. The California attorney general could take civil action against violators, though the rules would apply only to models that cost more than $100 million to train and exceed a certain computing threshold.


The FEC kicks AI down the road

The US Federal Election Commission will not regulate deepfakes in political ads before November’s elections. Last week, Republican commissioners effectively killed the proposal to do so, writing in a memo that they believed such rulemaking would exceed the commission’s authority under the law. Additionally, Chairman Sean Cooksey told Axios on Aug. 8 that the FEC will not consider additional rules before the election.

An Apple logo is pictured in an Apple store in Paris, France. REUTERS/Gonzalo Fuentes/File Photo

Apple signs Joe Biden’s pledge

Apple signed on to the Biden administration’s voluntary pledge for artificial intelligence companies on July 26.

President Joe Biden and Vice President Kamala Harris first announced that they had secured commitments from seven major AI developers (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) a year ago, in what the administration says laid the groundwork for its executive order on AI adopted in October. The voluntary commitments included safety testing, information sharing on safety risks (with government, academia, and civil society groups), cybersecurity investments, watermarking systems for AI-generated content, and a general agreement to “develop and deploy advanced AI systems to help address society’s greatest challenges.”

Until now, Apple wasn’t on the list. As the company prepares to release new AI-enabled iPhones (powered by OpenAI’s systems as well as its own), the Cupertino-based tech giant is playing nice with the Biden administration, signaling that it will be a responsible actor even without formal legislation on the books.


What Sam Altman wants from Washington

In a July 25 Washington Post op-ed, OpenAI cofounder and CEO Sam Altman laid out the stakes for the global artificial intelligence landscape: a race between democratic and authoritarian visions, pitting the United States against China and Russia. Altman argues that continued US leadership in AI development is crucial to ensure the technology benefits all Americans rather than becoming concentrated in the hands of authoritarian regimes.
