OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.
Jason Redmond | AFP | Getty Images
OpenAI on Tuesday said it created a Safety and Security Committee led by senior executives, after disbanding its previous oversight board in mid-May.
The new committee will be responsible for making recommendations to OpenAI’s board “on critical safety and security decisions for OpenAI projects and operations,” the company said.
The announcement comes as the developer of the ChatGPT virtual assistant said it has begun training its “next frontier model.”
The firm said in a blog post that it anticipates the “resulting systems to bring us to the next level of capabilities on our path to AGI,” or artificial general intelligence — a term for AI that is as smart as or smarter than humans.
The formation of a new oversight team comes after OpenAI dissolved a previous team that was focused on the long-term risks of AI. Before that dissolution, the team’s two leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup.
AI safety has been at the forefront of a larger debate as the huge models that underpin applications like ChatGPT grow more advanced. AI product developers are also asking when AGI will arrive and what risks will come with it.
Bret Taylor, Adam D’Angelo and Nicole Seligman, all of whom sit on OpenAI’s board of directors, join Altman on the new safety committee.
Leike wrote this month that OpenAI’s “safety culture and processes have taken a backseat to shiny products.” In response to his departure, Altman said on social media platform X that he was sad to see Leike leave, adding that OpenAI has “a lot more to do.”
Over the next 90 days, the safety group will evaluate OpenAI’s processes and safeguards and share its recommendations with the company’s board. OpenAI will provide an update on the recommendations it has adopted at a later date.
– CNBC’s Hayden Field contributed to this report.