The chief scientist at OpenAI who nearly brought down CEO Sam Altman in a failed November mutiny that was as brief as it was spectacular is launching his own AI company.
Ilya Sutskever revealed on Wednesday that he is teaming up with his OpenAI colleague Daniel Levy and Daniel Gross, a former AI executive at Apple, to found Safe Superintelligence Inc., a moniker chosen to reflect its purpose.
“SSI is our mission, our name and our entire product roadmap, because it is our sole purpose,” the three wrote in a statement on the barebones site of the American startup. Building a safe superintelligence, they continued, is “the most important technical problem of our time.”
Artificial superintelligence, or ASI, is considered the ultimate breakthrough in AI, as experts predict that machines will continue to develop once they achieve the type of general-purpose intelligence known as AGI, comparable to that of humans.
I am starting a new company: https://t.co/BG3K3SI3A1
– Ilya Sutskever (@ilyasut) June 19, 2024
Leading authorities in the field, such as computer scientist Geoffrey Hinton, believe that ASI poses an existential danger to humanity, and implementing safeguards to keep it aligned with our interests as a species was one of Sutskever’s main missions at OpenAI.
His highly publicized departure in May came almost six months to the day after he joined independent directors Helen Toner, Tasha McCauley and Adam D’Angelo to remove Altman as CEO, against the wishes of chairman Greg Brockman, who immediately resigned.
Sutskever came to regret his role in Altman’s brief ouster
The spectacular coup, which Toner recently blamed on a pattern of deception by Altman, threatened to destroy the company. Sutskever quickly expressed regret and reversed his position, demanding that Altman be reinstated to avoid OpenAI’s potential downfall.
I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together, and I will do everything I can to reunite the company.
– Ilya Sutskever (@ilyasut) November 20, 2023
In the aftermath, Toner and McCauley left the nonprofit’s board of directors, while Sutskever seemingly disappeared from the public eye until he announced his departure last month.
In his resignation announcement, he hinted that he would pursue a project that was “very personally meaningful to me” and promised to share details at a later, unspecified date.
His departure, however, triggered events that quickly revealed deep governance problems, which appeared to confirm the board’s initial suspicions.
First, Jan Leike, Sutskever’s co-lead on the safety team, accused the company of breaking its promise to give the AI safety team 20% of its compute resources, and resigned. Later, it emerged that OpenAI employees had been hit with airtight gag orders prohibiting them from criticizing the company after their departure, on penalty of losing their vested equity.
Finally, actress Scarlett Johansson, who voiced an AI chatbot in Spike Jonze’s 2013 sci-fi film Her, took legal action against the company, claiming Altman had effectively stolen her voice for its latest AI product. OpenAI denied the claim but still committed to changing the voice out of respect for her wishes.
After almost a decade, I made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I am confident that OpenAI will create an AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
– Ilya Sutskever (@ilyasut) May 14, 2024
These episodes suggest that OpenAI abandoned its original goal of developing AI that would benefit all of humanity in favor of pursuing commercial success.
“People interested in safety like Ilya Sutskever wanted significant resources devoted to safety, while people interested in profits like Sam Altman did not,” Hinton told Bloomberg last week.
A leader in the field since AI’s Big Bang moment
Sutskever has long been one of the brightest minds in the field of AI, researching artificial neural networks that conceptually mimic the human brain to train computers to learn and abstract from data.
In 2012, he teamed up with Hinton and Alex Krizhevsky on the historic deep neural network AlexNet, commonly considered the Big Bang moment of AI. It was the first machine learning algorithm that could accurately label images fed to it, revolutionizing the field of computer vision.
When OpenAI was founded in December 2015, Sutskever was given top billing ahead of co-chairs Altman and Elon Musk, even though he was only a research director. This made sense at the time, as the company was originally created as a nonprofit that would create value for everyone rather than shareholders, prioritizing “a good outcome for all over its own self-interest.”
Since then, however, OpenAI has effectively become a commercial enterprise, in Altman’s words, “to pay the bills” for its computationally intensive operations. In doing so, it adopted a complicated structure with a new for-profit entity in which returns were capped for investors like Microsoft and Khosla Ventures, while control remained in the hands of the nonprofit board of directors.
Altman called this convoluted governance necessary at the time in order to keep everyone on board. Recently, The Information reported that he has sought to change OpenAI’s legal structure, opening the door to a controversial initial public offering.
Sutskever’s new venture dedicated to safe superintelligence will be based in Palo Alto, in Silicon Valley, and in Tel Aviv, Israel, the better to recruit top talent.
“Our team, our investors, and our business model are all aligned to achieve SSI,” they wrote, promising that there would be “no distractions from management overhead or product cycles.”
How he and his two co-founders aim to create an ASI with robust safeguards while also paying the bills and delivering a return to their investors, however, is not immediately clear from the statement. For example, they did not reveal whether SSI, too, uses a capped-profit structure.
They simply stated that Safe Superintelligence’s business model was designed from the outset to be “insulated from short-term commercial pressures.”