Scott Nover
Contributing Writer
Scott Nover is the lead writer for GZERO AI. He's a contributing writer for Slate and was previously a staff writer at Quartz and Adweek. His writing has appeared in The Atlantic, Fast Company, Vox.com, and The Washington Post, among other outlets. He currently lives near Washington, DC, with his wife and pup.
Feb 13, 2024
The Biden administration has created a new body to tackle the threats posed by AI: the US AI Safety Institute Consortium. The group of 200 AI “stakeholders,” led by the Commerce Department and the National Institute of Standards and Technology, is tasked with the “development and deployment of safe and trustworthy artificial intelligence.” It will advise on many of the priorities of Biden’s October 2023 executive order on AI, including “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”
The group includes large tech companies like Amazon, Meta, and Microsoft; AI-focused startups like Anthropic and OpenAI; as well as government contractors, advocacy groups, research labs, and universities.
The Biden administration, which is working to implement the many provisions of the executive order, previously secured voluntary commitments from major AI firms to mitigate the worst potential harms of AI development.
While the government is often slow to pass laws and implement executive action, engaging directly with the private sector can be a productive first step toward rolling out a new regulatory regime to rein in this emerging set of technologies. The administration recently met a series of deadlines set by the wide-ranging order and has begun to offer updates, such as the new know-your-customer rules for AI firms.