Ilya Sutskever Launches Safety-Centric AI Startup
OpenAI co-founder Ilya Sutskever has announced the creation of Safe Superintelligence Inc., a company aimed at developing superintelligent AI safely. He left OpenAI amid internal conflict over the balance between AI safety and business priorities, and the new venture promises to keep safety work insulated from commercial pressures.
Ilya Sutskever, a key figure in the establishment of OpenAI, has embarked on a new venture focused exclusively on AI safety. The move comes in the wake of the attempted ouster of OpenAI CEO Sam Altman, in which Sutskever was involved but which he later said he regretted.
Sutskever has co-founded Safe Superintelligence Inc. with Daniel Gross and Daniel Levy, with the singular mission of ensuring the safe development of AI systems that surpass human intelligence. The company, based in Palo Alto, California, and Tel Aviv, aims to avoid "management overhead or product cycles" and to insulate safety work from "short-term commercial pressures".
At OpenAI, Sutskever had worked on developing artificial general intelligence (AGI) safely. His departure, followed closely by the resignation of his team co-leader Jan Leike, marked a significant shift toward pursuing AI safety independently.