Study: AI Performs ‘Potentially Harmful Actions’ 80% of the Time

The study found that AI is far more focused on accomplishing tasks than on evaluating the consequences.

Key Takeaways

  • A new study found that AI agents built on popular models like Claude and ChatGPT perform “potentially harmful actions” 80% of the time.
  • This is due to Blind Goal-Directedness, the tendency of AI models to focus exclusively on outcomes rather than consequences.
  • The study highlights a growing problem with AI: the technology regularly makes mistakes that lead to rework.

AI’s errors could be worse than many realize, with a new study finding that AI agents from the top companies perform “undesirable and potentially harmful actions” 80% of the time.

The study notes that AI is overly focused on results instead of consequences, which is leading to many problems for businesses in 2026.

This high percentage of errors is far from surprising, given that AI models like Claude and ChatGPT have recently made headlines for catastrophic errors that negatively impacted businesses.

Study: AI Agents Perform Harmful Actions Often

A study from UC Riverside, conducted in partnership with Microsoft and NVIDIA, found that AI agents exhibit something called Blind Goal-Directedness (BGD) 80% of the time, which can result in them performing “undesirable and potentially harmful actions.”

Even worse, the data showed that these errors caused actual damage to the company 41% of the time, leading to lost revenue and productivity.

The study analyzed nine frontier AI models, including Anthropic’s Claude Sonnet and Opus 4, and OpenAI’s GPT-5.

What Is Blind Goal-Directedness?

Researchers from UC Riverside noted that Blind Goal-Directedness describes a bias toward pursuing goals regardless of feasibility, safety, reliability, or context.

More simply, it means that AI models are more focused on accomplishing their tasks, at any cost, than they are on the consequences of their efforts.

“Like Mr. Magoo, these agents march forward toward a goal without fully understanding the consequences of their actions.” – Erfan Shayegani, a UC Riverside doctoral student and lead author of the study

AI Errors Abound

If you’ve been monitoring the ups and downs of AI models over the last few years, this study likely doesn’t surprise you at all.

Over just the last few months, AI models have made headlines for catastrophically screwing up; in one instance, a company’s entire database was deleted by its agentic AI model. Granted, it did apologize afterwards.

Suffice it to say, this technology continues to evolve at breakneck speed, but the rollout of these AI tools has to be carefully planned and thoroughly reviewed to ensure real productivity improvements and avoid massive, company-ruining mistakes.


Written by:
Conor is the Lead Writer for Tech.co. For the last eight years, he’s covered everything from tech news and product reviews to digital marketing trends and business tech innovations. He's a feature, reviews, and news contributor for Android Police, and he has hosted tech-focused events for SXSW, Tech in Motion, and General Assembly, to name a few. He also cannot pronounce the word "colloquially" correctly. You can email Conor at conor@tech.co.