OpenAI Forms Safety Committee Amid AI Controversy

OpenAI has announced the formation of a safety and security committee as it begins training a new AI model to replace GPT-4. The move follows resignations and criticism of the company's handling of AI safety. The committee's first task is to evaluate and further develop OpenAI's safety processes and report its recommendations to the board within 90 days.


PTI | San Francisco | Updated: 28-05-2024 19:03 IST | Created: 28-05-2024 19:03 IST

OpenAI says it's setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on "critical safety and security decisions" for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and levelled criticism at OpenAI for letting safety "take a backseat to shiny products." OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the "superalignment" team focused on AI risks that the two jointly led.

OpenAI said it has "recently begun training its next frontier model" and its AI models lead the industry on capability and safety, though it made no mention of the controversy. "We welcome a robust debate at this important moment," the company said.

AI models are prediction systems trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D'Angelo, who's the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee's first job will be to evaluate and further develop OpenAI's processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it's adopting "in a manner that is consistent with safety and security."

