Ilya Sutskever's New Venture: Safe Superintelligence Aims to Revolutionize AI Safety

Ilya Sutskever, former chief scientist at OpenAI, has launched Safe Superintelligence (SSI) to create safe AI systems that surpass human capabilities. In an interview with Reuters, Sutskever discusses the importance of AI safety, how SSI's approach differs from OpenAI's, and the possibility of open-sourcing some of the company's research.

Devdiscourse News Desk | Updated: 05-09-2024 15:39 IST | Created: 05-09-2024 15:39 IST

Ilya Sutskever, OpenAI's former chief scientist, has launched a new company called Safe Superintelligence (SSI) with the goal of developing AI systems that far exceed human capabilities while ensuring their safety.

In an exclusive interview with Reuters, Sutskever described how SSI's approach to AI scaling differs from his previous work at OpenAI. He stressed both the ethical dimensions of deploying AI and the intensive research required to define what constitutes a safe AI system.

SSI's strategy includes potentially open-sourcing some of its research findings as a contribution to the industry's collective knowledge. Sutskever remains optimistic about collaboration among AI companies on safety research.

(With inputs from agencies.)
