Sam Altman Leaves OpenAI’s Safety Committee but Accountability Still Questioned

OpenAI CEO Sam Altman has stepped down from the oversight committee created to review the company's creations from a safety perspective.

The company says the committee will now be "independent," though it consists of existing members of OpenAI's board of directors.

It has been a rollercoaster year in the OpenAI C-suite, with the ousting and subsequent reinstatement of Altman, all while the company faces increasing scrutiny over its security and safety frameworks.

What Does the OpenAI Safety Committee Do?

The group is chaired by Carnegie Mellon professor Zico Kolter and also includes ex-Sony EVP Nicole Seligman, Quora CEO Adam D'Angelo, and retired U.S. Army General Paul Nakasone. Key members of the OpenAI team were also on board, including the cryptically titled Head of Preparedness, Aleksander Madry; Head of Safety Systems Lilian Weng; and Head of Security Matt Knight.


The committee was set up in May and is responsible for “making recommendations on critical safety and security decisions for all OpenAI projects”. Its first task was to look at the company’s “processes and safeguards” and it gave itself a deadline of 90 days for this.

Safety Committee Independence Questioned

However, Altman's position on the committee drew criticism from US senators and, more worryingly, from former employees. Some ex-staffers went public and called Altman out for failing to take AI risks seriously. His involvement, they suggested, therefore undermined the committee's power.

In August, news broke of a mass exodus from the company's safety team, the so-called Superalignment team. One former employee, Daniel Kokotajlo, shared his concerns with Fortune magazine, saying that safety had been sidelined since Altman returned to the top seat and three safety-focused board members left. "People who are primarily focused on thinking about AGI safety and preparedness are being increasingly marginalized," he said.

Indeed, more ex-employees went public with a scathing letter. They wrote: “We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”

Government Intervention in OpenAI Safety

In July, before this damning letter was published, Senator Angus King and four of his colleagues wrote their own letter, urging OpenAI to honor its "public promises and mission" regarding essential safety standards.

TechCrunch writes that Altman and his team have evidently realized they need to win federal support: the company "has dramatically increased its expenditures on federal lobbying, budgeting $800,000 for the first six months of 2024 versus $260,000 for all of last year."

OpenAI says in a blog post that the newly restructured committee "… will continue to receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-release monitoring," adding: "[W]e are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches."

It has already been involved in a safety review of OpenAI's much-anticipated and newly launched Strawberry release. The committee will be looking at "valid criticisms" as well as individual products. But that leaves open the question of who decides what is valid and what is not, especially as the push for AI safety regulation gains momentum.


Written by:
Katie has been a journalist for more than twenty years. At 18 years old, she started her career at the world's oldest photography magazine before joining the launch team at Wired magazine as News Editor. After a spell in Hong Kong writing for Cathay Pacific's inflight magazine about the Asian startup scene, she is now back in the UK. Writing from Sussex, she covers everything from nature restoration to data science for a beautiful array of magazines and websites.