OpenAI Implements Safety Panel
The panel will bring together experts to discuss the company’s processes and safeguards.
Teralyn Whipple
May 31, 2024 – Sam Altman, CEO of OpenAI, announced on Tuesday the establishment of a new safety panel responsible for handling "crucial safety and security decisions" within the artificial intelligence company, best known for its large language model, ChatGPT.
The nine-person panel, led by Altman, Chairman Bret Taylor, attorney Nicole Seligman, and Quora CEO Adam D'Angelo, stated that its initial objective is to "evaluate and further develop" OpenAI’s processes and safeguards over the next 90 days. It will provide a public update once recommendations are submitted.
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI," the company said. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment."
In a public statement, Jan Leike, a former researcher at OpenAI who resigned in May, criticized the company for neglecting safety concerns, saying that safety had “taken a backseat to shiny products” at the company.
The panel, called the Safety and Security Committee, will include various cybersecurity experts. OpenAI said that it will also retain and consult with other safety, security, and technical experts in the field.
Altman has pushed for the company to take measures to enhance user security. At a webinar in early May, Altman said that authentication practices for AI need to be implemented to protect users from potential fraud.