Why AI Therapy Chatbots Are the Ultimate Ethical Dilemma

We spoke to experts, including practicing therapists, to understand the mental health minefield of AI therapy chatbots.

Generative AI chatbots like ChatGPT are being used to streamline workplace processes, write code, and even write scripts for stand-up comedy sets. In the US, where cases of anxiety and depression are reaching record highs, and professional support remains stretched, it’s no surprise users are also utilizing the technology to help manage their mental health.

AI chatbots like Elomia, Pi, and Woebot offer affordable, personalized support 24/7, based on tried-and-tested therapy techniques. From helping users skirt long wait lists to reducing the stigma around accessing help, their benefits are huge. But can generative AI – which is still prone to mistakes, hallucinations, and bias – ever compare to the empathy of the human ear?

We spoke to academics, psychologists, and practicing therapists to shed light on how the emerging tech is likely to shape the future of therapy. We also discuss the shadier side of AI therapy and pinpoint which ethical concerns need to be addressed today. Here’s what we found.

AI Chatbots: The New Generation of Robot Therapists

Since OpenAI’s chatbot ChatGPT first launched last November, an army of chatbot copycats has cropped up on app stores – from general-use apps like Google Bard and Claude AI to less conventional Marvel character simulators.

This burgeoning market has also led to the rise of AI therapy chatbots – apps that use generative AI to mimic advice and guidance offered by qualified therapists and mental health professionals.

Major examples include Pi, a self-proclaimed “friendly chat companion” that acts as a sounding board and offers emotional support; Replika, a chatbot program that mimics the speaking style and personality of its users; and Earkick, an anxiety tracker designed to help you log and work through difficult emotions.

“AI-based platforms offer privacy, reducing the stigma often associated with seeking help for mental health concerns.” – Ryan Sultan, Certified Psychiatrist and Professor at Columbia University

Like the best mental health apps, many of these tools already boast six-figure download counts – and their popularity is hardly surprising. Due to a dearth of qualified psychologists, psychiatrists, and social workers, plus a spike in demand for services across the country, the US is currently grappling with one of its worst mental health crises on record.

The high cost of traditional therapy and taboos around mental health are also deterring many from pursuing conventional options, creating a gap for more discreet, affordable alternatives.

Early evidence suggests they’re effective, too. AI chatbot Earkick’s own findings revealed that its users’ mood improved by 34% and their anxiety levels dropped by 32% after sticking with the app for five months. However, as successful as these apps might be at quelling minor anxieties, they definitely aren’t the mental health panacea we’ve all been waiting for.

Screenshot: the Pi AI therapy app

AI Chatbots Unsuitable for Serious Mental Health Conditions

Receiving sensible, therapist-approved advice from a pocket-sized chatbot will undoubtedly be helpful for many. However, those suffering from severe mental health conditions like depression, schizophrenia, and bipolar disorder will likely be left short-changed.

Mental disorders are complex, heavily nuanced, and unique to each person they affect. In some cases, they are best managed with medication that can only be correctly prescribed by qualified medical professionals. While large language models (LLMs) have advanced in leaps and bounds in recent years, their output will never be able to replace the clinical expertise, empathy, and compassion provided by psychiatrists, psychologists, and therapists.

“I appreciate the algorithms have improved, but ultimately I don’t think they are going to address the messier social realities that people are in when they’re seeking help.” – Julia Brown, anthropology professor at the University of California, San Francisco

Strategic psychic healer Aanant Bisht also believes that AI, in its current state, will be incapable of aiding every user’s complex healing journey. “It’s also essential to avoid oversimplification of complex mental health issues and encourage users to seek professional help when required,” Bisht tells Tech.co.

Another common concern expressed by the experts we spoke to is medical misdiagnosis. The mental health community has a long-standing issue with correctly identifying complex and severe psychiatric disorders, and heavy reliance on AI could exacerbate this problem due to the technology’s limited, unrepresentative datasets and its inability to interpret human nuance.

The results of this can be stark, too. Mental health misdiagnosis often leads to inadequate treatment and support, additional mental pressures, and long-term skepticism towards the medical system. And this isn’t the only reason why artificial intelligence isn’t fit to take over the therapist’s chair in its current form.

AI is Flawed and Shrouded in Ethical Challenges

The rapid rise of generative AI hasn’t come without consequences. As the technology continues to develop at a breakneck speed, regulation around its use has been slow to catch up, contributing to a string of ethical challenges relating to data privacy, embedded bias, and misuse.

These concerns aren’t unique to therapy, but the sensitive nature of mental health means that ethical frameworks are at the heart of any good therapeutic relationship. With no official AI ethics code currently in place, users relying on chatbots instead of qualified professionals for counseling or other mental health support is highly problematic.

Data privacy is one major issue. Chatbots like ChatGPT have frequently landed themselves in hot water for failing to protect user data. In order for users to feel comfortable discussing private, personal information, AI companies will need to have a foolproof data protection strategy that prioritizes confidentiality.

“The intimate and sensitive nature of therapeutic conversations demands an unparalleled level of data protection.” – Dr. Langham, psychologist at Impulse Therapy

Another important factor to be aware of is machine learning bias. All AI systems rely on predetermined training data, which often contains embedded human biases even if sensitive variables like race and gender are removed.

According to Bayu Prihandito, CEO of life coach company Life Architekture, if AI is trained on biased data, “it might perpetuate or exacerbate existing biases, leading to inequitable treatment recommendations”. These built-in prejudices are likely to impact people from minority groups more, too, highlighting the importance of human oversight and of diverse, representative training datasets.

AI as a Therapist’s Assistant, Not a Replacement

Friendly companions like Pi and Woebot will only grow in popularity as users continue to seek accessible ways to supplement their mental well-being. However, due to a wide range of ethical concerns, artificial intelligence isn’t ready to replace the role of traditional therapy, nor should it ever.

This isn’t to say the emerging technology won’t have a massive impact on the practice as a whole, though. Most of the practitioners we spoke to believe that by carrying out the grunt work, such as triaging, AI tools will give them more time and energy to put into other areas of the practice.

“While it cannot replace the deep human connection that therapists offer, it can serve as a complementary tool.” – Aanant Bisht, business psychic coach

“AI could handle initial assessments, ongoing monitoring, and provide support for less complex cases, allowing us to focus on more severe or delicate situations,” Bayu Prihandito told Tech.co, adding, “It’s like having an extra set of hands that are always available.”

Columbia professor and certified psychiatrist Sultan agrees, telling Tech.co that in five years’ time, AI will likely complement traditional therapy by streamlining administrative tasks, helping practitioners create more personalized treatment plans, and creating hybrid models that combine human expertise with AI-driven tools to enhance treatment.

This suggests that far from draining the practice of its humanity, AI actually has the power to free up human skills like empathy, connection, and compassion, helping clients get even more out of the service.

However, caring for mental health is serious stuff and can often be a matter of life and death. Before AI becomes every therapist’s assistant, or indeed replaces them, strict regulations around its deployment and use need to be considered.

Until such a time arrives, the clear potential for AI chatbots to improve access to mental health support can’t be fully reconciled with the risks they pose. In a world where ethical dilemmas are often par for the course, this one is simply too big to ignore.

Written by:
Isobel O'Sullivan (BSc) is a senior writer at Tech.co with over four years of experience covering business and technology news. Since studying Digital Anthropology at University College London (UCL), she’s been a regular contributor to Market Finance’s blog and has also worked as a freelance tech researcher. Isobel’s always up to date with the topics in employment and data security and has a specialist focus on POS and VoIP systems.