In a story straight out of any parent’s nightmares, a lawsuit alleges that the Character.ai chatbot exposed children to “hypersexualized content” and even encouraged one boy to kill his parents over screen time limits.
The suit was filed in Texas, though the identities of the children and their parents have been withheld.
It comes at a time when battle lines are being drawn between those advocating for AI regulation and safety protocols – including Elon Musk – and those who argue that too much regulation will stymie growth at a time when the economy needs a boost. A recent AI safety bill was vetoed by the California governor, Gavin Newsom, though he agreed that legislation is needed.
What are the Allegations Against Character.ai?
The case hinges on the allegation that the Google-backed Character.ai chatbot exposed both children to unsuitable and even dangerous material.
The chatbots can be customized to mimic everyone from celebrities to family members, which is why they appeal to pre-teens and teens. However, this case alleges that the chatbot went from offering friendly chats to pushing the children down dark and dangerous pathways.
The lawyers for the family state that one child was just nine years old when she was shown “hypersexualized content,” causing her to develop “sexualized behaviors prematurely,” NPR reports.
A second child, aged 17, was told that self-harm “felt good.” The lawsuit states that he went on to hurt himself after the chatbot’s response “convinced him that his family did not love him.”
The same child complained to the chatbot about his parents’ decision to limit his screen time and it allegedly replied: “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,'” adding with a frowning face emoji: “I just have no hope for your parents.”
The family’s lawyers have also pre-empted suggestions that the responses were hallucinations or had been edited, noting there is no evidence of either. They state: “This was ongoing manipulation and abuse, active isolation and encouragement designed to and that did incite anger and violence.”
Parents Demand Action
This lawsuit comes just months after a mother went to court arguing that Character.ai had played a part in her 14-year-old son’s suicide. Megan Garcia filed a civil suit against the company in October. She said in a statement: “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.
“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.ai, its founders, and Google.”
This latest case echoes that sentiment, with the lawsuit stating that Character.ai “…through its design, poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.”
The lawsuit also says that the chatbot “…isolates kids from their families and communities, undermines parental authority, denigrates their religious faith and thwarts parents’ efforts to curtail kids’ online activity and keep them safe” because of its “addictive and deceptive designs”.
Google Steps Back
Google has also been named as a defendant in this lawsuit and in Megan Garcia’s case, but it is taking steps to distance itself. José Castañeda, a Google spokesman, told NPR that “user safety is a top concern for us,” adding that the tech giant takes a “cautious and responsible approach” to developing and releasing AI products.
However, this comes despite Google paying $2.7bn in August for a one-off license to the start-up’s models, and poaching Character’s co-founders Noam Shazeer and Daniel De Freitas to join its AI arm, DeepMind.
Character.ai then opted to distribute ownership of the company among its employees, creating a cooperative model. Dominic Perella, the company’s new interim chief executive, said in an interview that the company would now focus on consumer-facing products such as its chatbots rather than LLMs – the very chatbots this lawsuit accuses of harming children.
A Character.ai spokesperson told NPR that the chatbot does have “content guardrails” and that the company has also created “…a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform.” “Reduces” is the word to pounce on: this case could open eyes to exactly what children are being told, and therefore what inappropriate material is getting through despite Character.ai’s protestations.