Former OpenAI Chief Ilya Sutskever Launches Safe Superintelligence Inc.

Insider Brief

  • Ilya Sutskever, former OpenAI co-founder, announced the formation of Safe Superintelligence Inc.
  • The new venture aims to tackle the technical challenge of building safe superintelligence.
  • The company intends to advance capabilities swiftly while ensuring that safety measures remain a step ahead.

Ilya Sutskever, former OpenAI co-founder, announced the formation of Safe Superintelligence Inc. (SSI) via a company statement. The new venture aims to tackle the technical challenge of building safe superintelligence.

Sutskever highlighted that SSI is the world’s first dedicated lab focused solely on developing safe superintelligence. “Building safe superintelligence (SSI) is the most important technical problem of our time,” Sutskever wrote in the statement. He emphasized that the company’s mission, name, and entire product roadmap are dedicated to this singular goal.

Superintelligence refers to artificial intelligence that surpasses human intelligence and capability across every domain, including creativity, problem-solving, and emotional intelligence, allowing it to outperform the best human minds in any field and potentially drive unprecedented advancements.

On one hand, superintelligence could help solve complex global challenges, from climate change to curing diseases. On the other, it poses significant ethical and existential risks if not properly managed, which makes safe development crucial.

SSI’s approach integrates safety and capabilities as concurrent technical challenges, solved through revolutionary engineering and scientific breakthroughs. Sutskever noted that the company intends to advance capabilities swiftly while ensuring that safety measures remain a step ahead.

“This way, we can scale in peace,” he wrote.

The company’s business model is designed to prioritize safety, security, and progress, free from short-term commercial pressures. With offices in Palo Alto and Tel Aviv, SSI aims to leverage its deep roots in these regions to recruit top technical talent.

Sutskever announced that the company is building a “lean, cracked team” of the world’s best engineers and researchers focused solely on safe superintelligence.

“Our singular focus means no distraction by management overhead or product cycles,” he wrote, emphasizing the company’s streamlined approach.

Joining Sutskever in the venture are Daniel Gross and Daniel Levy; together, the three are rallying top talent to their mission.

“If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age,” the statement concluded.