Study Finds ChatGPT Is Addictive

MIT and OpenAI researchers have a stark warning on chatbot over-use and mental health.

Researchers from OpenAI and MIT Media Lab have found a worrying trend towards addiction among ChatGPT users.

The chatbot is now a frontline soldier in the battle for AI supremacy between US and Chinese firms, and as such, OpenAI wants as many users as possible.

However, this research suggests that over-use could be damaging.

Sweeping Studies

The findings come from a joint research project that combined two different studies.

The first was carried out by the OpenAI team, in what it calls a “large-scale, automated analysis of nearly 40 million ChatGPT interactions.”

OpenAI emphasizes that this was done “without human involvement in order to ensure user privacy.” It was designed to glean users’ sentiments towards the chatbot, including whether they viewed it “as a friend.”

 


The second study was run by the MIT Media Lab team, which conducted a Randomized Controlled Trial (RCT) with nearly 1,000 participants using ChatGPT over four weeks.

The researchers focused on whether “types of usage might affect users’ self-reported psychosocial states, focusing on loneliness, social interactions with real people, emotional dependence on the AI chatbot and problematic use of AI.”

Dependency and Addiction

The team found that a subset of ChatGPT users have become emotionally reliant on the chatbot.

It also found that, perhaps contrary to preconceptions, “non-personal conversations tended to increase emotional dependence, especially with heavy usage.” Personal conversations instead led to lower emotional dependence but higher levels of loneliness.

There was also an unexpected difference between how people react to text-based ChatGPT and the Advanced Voice Mode. Users tended to use more emotional language with the text-based chatbot, whereas “voice modes were associated with better well-being when used briefly,” the summary explained.

No one will be surprised to find out that the team found a correlation between extended daily use and “worse outcomes.”

Fine for the Majority

The research revealed that the vast majority of people surveyed didn’t engage emotionally with ChatGPT.

But the findings carry a warning that, for some people, there is a growing reliance on the chatbot, and even an emotional need to engage with it.

The MIT research identified a profile of user more likely to suffer these negative effects. “People who had a stronger tendency for attachment in relationships and those who viewed the AI as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use,” the researchers explain.

This means that people who are perhaps already lonely or unhappy are vulnerable. As Futurism writes: “…the neediest people are developing the deepest parasocial relationship with AI — and where that leads could end up being sad, scary, or somewhere entirely unpredictable.”

OpenAI says that the research will help the company “…lead on the determination of responsible AI standards, promote transparency, and ensure that our innovation prioritizes user well-being.”

It adds: “We are focused on building AI that maximizes user benefit while minimizing potential harms, especially around well-being and overreliance.”


Written by:
Katie has been a journalist for more than twenty years. At 18 years old, she started her career at the world's oldest photography magazine before joining the launch team at Wired magazine as News Editor. After a spell in Hong Kong writing for Cathay Pacific's inflight magazine about the Asian startup scene, she is now back in the UK. Writing from Sussex, she covers everything from nature restoration to data science for a beautiful array of magazines and websites.