What is “safe” superintelligence?

Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023.
REUTERS/Amir Cohen

OpenAI co-founder and former chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust Sam Altman last November. He has since apologized and stayed on at OpenAI before departing in May.

Little is known about the new company — including how it’s funded — but its name has inspired debate about what’s involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever said. (“Trust and safety” is typically what internet companies call their content moderation teams.)

Sutskever said the company won’t build any products en route to superintelligence — so no ChatGPT competitor is coming your way.

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Sutskever also hasn’t said exactly what he wants this superintelligence to do, though he wants it to be more than a smart conversationalist and to help people with more ambitious tasks. For now, building the underlying technology and keeping it “safe” seems to be his only stated priority.

Sutskever’s view of safety is still rather existential — as in, will the AI kill us all or not? Is a system still safe if it perpetuates racial bias, hallucinates answers, or deceives users? Surely there should be better safeguards than “Keep the AI away from our nukes!”
