Ilya Sutskever Launches AI Safety Venture, Safe Superintelligence Inc.

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new venture called Safe Superintelligence Inc. (SSI), just a month after his departure from OpenAI. He is joined by co-founders Daniel Gross, a former Y Combinator partner and ex-Apple engineer, and Daniel Levy, a former OpenAI engineer.

SSI’s mission is clear from its name: to develop AI that pairs advanced capabilities with a strong emphasis on safety. Specifically, the company aims to build a safe “superintelligence”, an industry term for a hypothetical system that’s smarter than humans.

Sutskever has long been vocal about the challenges of AI safety. He has previously predicted that AI with intelligence superior to humans could arrive within the decade, and has argued that such a system won’t necessarily be benevolent, necessitating research into ways to control and restrict it.

SSI’s approach is to treat safety and capabilities as a single technical problem, to be solved through revolutionary engineering and scientific breakthroughs. The company plans to advance capabilities as fast as possible while ensuring safety always remains ahead, allowing it to “scale in peace.” Its singular focus means no distraction from management overhead or product cycles, and its business model insulates safety, security, and progress from short-term commercial pressures.

While many details about the new company remain undisclosed, its founders have one message for those in the industry who are intrigued: they’re hiring.

Read more: techcrunch.com