
OpenAI co-founder Ilya Sutskever launches new firm focused on ‘safe superintelligence’

Ilya Sutskever, one of the co-founders of OpenAI, has launched a new company, Safe Superintelligence Inc. (SSI), just one month after officially leaving OpenAI.

Sutskever, who was OpenAI’s chief scientist, teamed up with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy to establish SSI.

At OpenAI, Sutskever played a central role in the company’s AI safety work, particularly its efforts to prepare for the advent of “superintelligent” AI systems. He worked closely with Jan Leike, with whom he co-led OpenAI’s Superalignment team.

However, both Sutskever and Leike left OpenAI in May after a significant disagreement with the company’s leadership over how it was handling AI safety. Leike has since joined Anthropic, a rival AI company, where he now leads a team.

Sutskever has long advocated for tackling the hardest problems in AI safety. In a 2023 blog post co-written with Leike, he predicted that AI surpassing human intelligence could arrive within the decade, cautioned that such AI would not necessarily be benevolent, and stressed the need for research into ways to control and restrict it.

His commitment to AI safety remains strong. On Wednesday, Sutskever announced the formation of his new company, SSI, in a tweet, saying that safe superintelligence is both the company’s mission and the basis for all of its work. He added that SSI’s team, investors, and business model are all aligned to achieve safe superintelligence.

Sutskever also said the company approaches safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.

He added, “We plan to advance capabilities as fast as possible while ensuring our safety always remains ahead. This way, we can scale in peace.” This singular focus, he said, means no distraction from management overhead or product cycles, and the business model insulates safety, security, and progress from short-term commercial pressures.

In an interview with Bloomberg, Sutskever discussed the new company in more detail but did not disclose its funding status or valuation.

Unlike OpenAI, which initially launched as a non-profit in 2015 and was later restructured to accommodate the immense funding required for its computing power, SSI is being designed as a for-profit entity from the start. Given the current interest in AI and the team’s impressive credentials, SSI may soon attract significant capital. Daniel Gross told Bloomberg, “Out of all the problems we face, raising capital is not going to be one of them.”

SSI has established offices in Palo Alto and Tel Aviv and is actively recruiting technical talent.

The company aims to push the boundaries of AI capabilities while keeping its safety measures a step ahead, and it is betting on revolutionary engineering and scientific breakthroughs to get there.

The formation of SSI underscores ongoing debate and concern within the AI community about the potential risks of superintelligent AI.

Sutskever’s new venture aims to address those risks head-on, ensuring that AI development is both safe and beneficial for society.
