What is “safe” superintelligence?

Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023.
REUTERS/Amir Cohen

OpenAI co-founder and chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust Sam Altman last November. He has since apologized and hung around OpenAI before departing in May.

Little is known about the new company — including how it’s funded — but its name has inspired debate about what’s involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever disclosed. (‘Trust and safety’ is typically what internet companies call their content moderation teams.)

Sutskever said that he won’t actually build products en route to superintelligence — so no ChatGPT competitor is coming your way.

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Sutskever also hasn’t said what exactly he wants this superintelligence to do, though he said he wants it to be more than a smart conversationalist and to help people with more ambitious tasks. But building the underlying tech and keeping it “safe” seems to be his only stated priority.

Sutskever’s view is still rather existentialist — as in, will the AI kill us all or not? Is it still a safe system if it perpetuates racial bias, hallucinates answers, or deceives users? Surely there should be better safeguards than, “Keep the AI away from our nukes!”
