Hard Numbers: Sutskever’s easy billion, OpenAI gets expensive, Getting AI out of the immigration system, Voice actors strike a deal
1 billion: OpenAI co-founder Ilya Sutskever has raised $1 billion for his new AI startup, Safe Superintelligence, which promises to deliver a highly advanced AI model without the distraction of short- or medium-term product launches. Though the company has only 10 employees so far, it has already drawn that sum from eager investors, including Andreessen Horowitz and Sequoia Capital.
2,000: OpenAI is reportedly considering a $2,000-per-month subscription for its forthcoming large language models, Strawberry and Orion. Its current top model, GPT-4, is free for limited usage and costs $20 per month for expanded usage and extra features. It’s still unclear what the new models will cost when they’re released later this fall, or, if they’re that expensive, whether consumers will be willing to spend so much.
141: A group of 141 organizations, including the Electronic Frontier Foundation, sent a letter to the Department of Homeland Security urging it to stop using AI tools in the immigration system and to comply with federal rules on protecting civil rights and avoiding algorithmic errors. The groups requested transparency about how the department uses AI to make immigration and asylum decisions, as well as about its biometric surveillance of migrants at the border.
80: Voice actors reached an agreement with the producers of 80 video games last week, after striking for two months. SAG-AFTRA, the actors’ union, won new protections against “exploitative uses” of AI. That said, it’s still striking against most of the larger video game studios, including Electronic Arts, as well as the game studios of Walt Disney and Warner Bros.

What is “safe” superintelligence?
OpenAI co-founder and former chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust Sam Altman last November. He has since apologized and stayed on at OpenAI before departing in May.
Little is known about the new company, including how it’s funded, but its name has inspired debate about what’s involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever said. (‘Trust and safety’ is typically what internet companies call their content moderation teams.)
Sutskever said the company won’t build products en route to superintelligence, so no ChatGPT competitor is coming your way.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
Sutskever also hasn’t specified exactly what he wants this superintelligence to do, though he wants it to be more than a smart conversationalist and to help people with more ambitious tasks. For now, building the underlying tech and keeping it “safe” seems to be his only stated priority.
Sutskever’s view of safety is still rather existential: Will the AI kill us all or not? But is a system still safe if it perpetuates racial bias, hallucinates answers, or deceives users? Surely there should be better safeguards than “Keep the AI away from our nukes!”