Report: Top AI Companies Are Falling Short on Safety

The new report identifies key areas where AI companies' safety protocols are lacking.

Key Takeaways

  • A new report has sounded the AI safety alarm after analyzing the inadequate safety procedures of top AI companies.
  • It also outlined a clear divide between these companies when it came to implementing appropriate safety measures.
  • Superintelligence continues to be a concern for experts, particularly as the companies seem ill-prepared to deal with the risks it could bring.

A new report has found that many of the leading AI companies in the world are falling short when it comes to appropriate safety frameworks and protocols.

While a lack of regulation is partly to blame, the report indicates a strong divide between the individual companies’ approaches to risk assessments, safety frameworks, and information disclosure.

Significantly, the threat superintelligence poses continues to be a concern, particularly as experts suggest that these companies are not ready to properly protect against the power of what they are building.

Safety Protocols of Leading AI Companies Are Lacking

A new report from the Future of Life Institute has determined that the safety protocols of eight leading AI companies, including OpenAI and Meta, are severely lacking.

As a whole, the report states that the safety approaches of these companies “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand”.

 


The Winter 2025 AI Safety Index examined 35 safety indicators across six domains, covering elements such as each company's safety framework and risk assessments. It also looked into each company's support for AI safety-related research.

Divide Evident in AI Companies’ Approaches to Safety

Overall, the index identified a clear divide between the top-ranking and low-ranking companies based on their safety approaches.

The top-ranking companies were Anthropic, OpenAI, and Google DeepMind, while the lower-ranking companies were xAI, Meta, Chinese AI company Z.ai, DeepSeek, and Alibaba Cloud. Anthropic led the pack with a C+ grade, and Alibaba Cloud sat at the bottom with a D-.

The biggest differences between the top and lower performers related to risk assessment, safety frameworks, and information disclosure. The lower-ranking companies showed limited disclosure, weak evidence of systematic safety processes, and uneven adoption of appropriate evaluation practices.

Likewise, all of the companies' safety protocols were found to be inadequate compared with emerging global standards, lacking the in-depth frameworks outlined in, for example, the EU AI Code of Practice.

AI Superintelligence and Legislation Concerns Prevail

President of the Future of Life Institute and MIT professor Max Tegmark has said that a lack of regulations surrounding AI development is partly to blame for companies receiving such low scores on the index. As a result, he predicts a dangerous future ahead.

In particular, researchers expressed concern about the way AI companies are handling the development of Artificial General Intelligence (AGI) and super-intelligent systems.

“I don’t think companies are prepared for the existential risk of the super-intelligent systems that they are about to create and are so ambitious to march towards.” – Sabina Nong, an AI safety investigator at the Future of Life Institute

This concern is borne out by the index's results, which identify existential safety as the industry's core structural weakness and note that all eight companies analyzed are moving towards AGI/superintelligence "without presenting any explicit plans for controlling or aligning such smarter-than-human technology."

While some top companies, like OpenAI and Google, have released statements outlining their commitment to safety as these systems develop, the picture could turn much bleaker if they continue down the path foreseen by the index.
