
Biden preaches AI safety

President Joe Biden walks across the stage to sign an executive order about artificial intelligence at the White House on Oct. 30, 2023.

REUTERS/Leah Millis/File Photo
Scott Nover, Contributing Writer
The Biden administration has created a new body to tackle the threats of AI: the US AI Safety Institute Consortium. The group of more than 200 AI “stakeholders,” led by the Commerce Department’s National Institute of Standards and Technology, is tasked with supporting the “development and deployment of safe and trustworthy artificial intelligence.” It will advise on many of the priorities of Biden’s October 2023 executive order on AI, including “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”

The group includes large tech companies such as Amazon, Meta, and Microsoft; AI-focused startups such as Anthropic and OpenAI; and government contractors, advocacy groups, research labs, and universities.

The Biden administration, which is working to implement the executive order’s many provisions, previously secured voluntary commitments from major AI firms to mitigate the worst potential harms of AI development.

While the government is slow to pass laws and implement executive actions, engaging directly with the private sector can be a productive first step toward rolling out a regulatory regime to rein in this emerging set of technologies. The administration recently met a series of deadlines set by the wide-ranging order and has begun to offer updates, such as new know-your-customer rules for AI firms.