Ilya Sutskever Launches Safety-Centric AI Startup

Ilya Sutskever, a co-founder of OpenAI, has announced the creation of Safe Superintelligence Inc., a startup aimed at safely developing superintelligent AI. He left OpenAI amid internal conflict over whether AI safety or business priorities should come first. The new venture promises to insulate its safety work from commercial pressures.

PTI | Washington DC | Updated: 20-06-2024 16:27 IST | Created: 20-06-2024 16:27 IST

Ilya Sutskever, a key figure in the establishment of OpenAI, has embarked on a new venture focused exclusively on AI safety. The move comes in the wake of the attempted ouster of OpenAI CEO Sam Altman, which Sutskever took part in but later said he regretted.

In his latest move, Sutskever has co-founded Safe Superintelligence Inc. along with Daniel Gross and Daniel Levy, with the singular mission of ensuring the safe development of AI systems that surpass human intelligence. The company, set up in Palo Alto, California, and Tel Aviv, aims to bypass "management overhead or product cycles" and insulate safety work from "short-term commercial pressures".

Previously at OpenAI, Sutskever worked on developing artificial general intelligence (AGI) safely. His departure, followed closely by the resignation of Jan Leike, the co-leader of his team, marked a shift towards pursuing AI safety independently.

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)
