What is “safe” superintelligence?

Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023.
REUTERS/Amir Cohen

OpenAI co-founder and chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust Sam Altman last November. He has since apologized and hung around OpenAI before departing in May.

Little is known about the new company — including how it’s funded — but its name has inspired debate about what’s involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever said. (“Trust and safety” is typically what internet companies call their content moderation teams.)

Sutskever said the company won’t build commercial products en route to superintelligence — so no ChatGPT competitor is coming your way.

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Sutskever also hasn’t said what exactly he wants this superintelligence to do, though he said he wants it to be more than a smart conversationalist and to help people with more ambitious tasks. Building the underlying tech and keeping it “safe” seems to be his only stated priority.

Sutskever’s view of safety is still rather existentialist — as in, will the AI kill us all or not? Is a system still safe if it perpetuates racial bias, hallucinates answers, or deceives users? Surely there should be better safeguards than “Keep the AI away from our nukes!”
