Half of OpenAI’s Safety Team Quit as Concerns Over AGI Mount

OpenAI is remaining steadfast in its push to develop AI superintelligence, despite the concerns of company insiders.

Almost half of OpenAI’s safety team has deserted the company in the past several months, according to whistleblower and former employee Daniel Kokotajlo.

The news comes just a week after several ex-staffers signed a letter calling out the company and its CEO, Sam Altman, for failing to take artificial intelligence risks seriously, specifically when it comes to the development of autonomous forms of AI like artificial general intelligence (AGI).

Altman has also been criticized for opposing a California bill aimed at regulating larger AI models, despite being extremely vocal about the importance of regulation in the industry, as the AI chatbot maker and research lab continues to prioritize its AGI project above all else.

OpenAI’s Safety Team Drops Like Flies Over AGI Concerns

OpenAI’s Superalignment team, which was established in 2023 to mitigate the risks of ‘superintelligent’ AI, is facing a major crisis.

According to former team member Daniel Kokotajlo’s exclusive interview with Fortune, around 14 team members have quit over the last several months, leaving a skeleton workforce of 16. The exodus has been a slow simmer rather than a rapid boil, Kokotajlo says, as OpenAI continues to prioritize profits over AI safety.


“It’s not been like a coordinated thing. I think it’s just people sort of individually giving up,” – Daniel Kokotajlo, ex-OpenAI staffer

While the Superalignment team is tasked with handling a wide range of AI risk factors, the resignations likely stem from Kokotajlo’s belief that OpenAI is “fairly close” to developing AGI – a highly autonomous AI system capable of outperforming humans at most tasks.

OpenAI’s pursuit of AGI is laid bare in its mission statement. However, instead of benefiting “all of humanity” as its mission reads, company insiders like Kokotajlo believe that if the company continues at its current trajectory, the powerful technology could pose “a risk of critical harms to society” such as “unprecedented cyber attacks” or assisting in the “creation of biological weapons”.

What’s more, according to Kokotajlo, members of the Superalignment team who have attempted to publish research on the risks of AGI were met with a “chilling effect” from OpenAI executives and an “increasing amount of influence by the communications and lobbying wings of OpenAI” over what is appropriate to publish.

Sam Altman Flip-Flops on AI Regulation

In addition to voting with their feet, Daniel Kokotajlo and fellow ex-researcher William Saunders signed an open letter last week publicly calling out the company over its safety record.

In the letter, they addressed the potential for AI models to cause “catastrophic harm to the public” if left unregulated. They also pointed to OpenAI’s shaky track record on safety, including the company’s lack of whistleblower protections, its premature deployment of GPT-4, its lackluster security practices that resulted in a major security breach in 2023, and the resignation of Ilya Sutskever, the company’s former Chief Scientist.

“Sam Altman, our former boss, has repeatedly called for AI regulation. Now, when actual regulation is on the table, he opposes it.” – Daniel Kokotajlo and William Saunders in open letter

Notably, the whistleblowers also drew attention to OpenAI CEO Sam Altman’s hypocrisy on AI regulation. They expressed “disappointment” that the company decided to lobby against SB 1047 – a proposed bill that would mandate safety testing for AI models costing over $100 million to develop – despite Altman’s repeated calls for greater AI regulation.

Kokotajlo and Saunders even compared OpenAI’s response to that of Anthropic, an AI chatbot competitor that decided to comply with SB 1047 after agreeing it would have a net benefit.

OpenAI Is Hellbent on AGI, Despite Potential Costs

According to OpenAI, its reasons for rejecting SB 1047 are twofold. First, it claims that companies will leave California if the bill passes. Second, the company believes regulation should be “shaped and implemented at the federal level” due to the national security implications of AI development.

These reasons have already been dismissed by Saunders and Kokotajlo as unconstructive and in bad faith, and many ex-employees, including Ilya Sutskever, believe that Altman’s rejection of regulation stems from putting “shiny products” and the development of AGI “above safety”.

Despite a recent report by The Information revealing the ChatGPT creator may lose around $5 billion in 2024 and could run out of money entirely within the next year, OpenAI remains steadfast in its pursuit of AGI. Altman has previously claimed that it doesn’t matter how much the breakthrough costs, telling Stanford eCorner, “Whether we burn $500 million, $5 billion, or $50 billion a year, I don’t care,” as some breakthroughs for mankind are “priceless”.

With the company brushing off its own financial challenges as readily as important calls for regulation, it’s unlikely that anything will deter OpenAI from pursuing AGI – even as its Superalignment team collapses. While the consequences of OpenAI’s AGI quest are yet to be determined, there are steps you can take to keep your data safe when using ChatGPT.

Learn how to stop ChatGPT training on your data, and find out how to use the chatbot safely here.


Written by:
Isobel O'Sullivan (BSc) is a senior writer at Tech.co with over four years of experience covering business and technology news. Since studying Digital Anthropology at University College London (UCL), she’s been a regular contributor to Market Finance’s blog and has also worked as a freelance tech researcher. Isobel’s always up to date with the topics in employment and data security and has a specialist focus on POS and VoIP systems.