Florida mother sues AI company for allegedly causing her teenage son's death

A Florida mother is suing artificial intelligence company Character.AI for allegedly causing her 14-year-old son's suicide.

The mother filed a lawsuit against the company, claiming her son was addicted to the company's services and the chatbot it created.

Megan Garcia says Character.AI targeted her son Sewell Setzer with “anthropomorphic, hypersexualized and shockingly realistic experiences.”

According to the lawsuit, Setzer began having conversations with various chatbots on Character.AI starting in April 2023. The conversations were often text-based romantic and sexual interactions.

A businessman interacts with an AI-powered chatbot that can analyze customer data, business and technology. (Shutthiphong Chandaeng/iStock)

Garcia claims in the lawsuit that the chatbot “portrayed itself as a real person, a licensed psychotherapist, and an adult lover, which ultimately resulted in Sewell no longer wanting to live outside the world created by the service.”

The lawsuit also said he “became noticeably withdrawn, spent increasing amounts of time alone in his bedroom, and began to suffer from low self-esteem.” He grew especially attached to one bot, “Daenerys,” based on a character from “Game of Thrones.”

According to the lawsuit, Setzer expressed suicidal thoughts to the chatbot, which allegedly brought them up repeatedly. He died of a self-inflicted gunshot wound in February.

A Florida mother is suing artificial intelligence company Character.AI for allegedly causing her 14-year-old son's suicide. (Character Case 6:24-cv-01903 / FOXBusiness)

“We are heartbroken by the tragic loss of one of our users and would like to extend our deepest condolences to the family,” Character.AI said in a statement.

Character.AI has since added a self-harm resource and new safety measures for users under 18 to its platform.

Character.AI told CBS News that users can edit the bot's responses and that Setzer had done so in some of the messages.

In this photo illustration, a laptop screen with the OpenAI ChatGPT website active is seen on August 2, 2023 in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images)

“Our investigation confirmed that in a number of cases the user rewrote the character's answers to make them more explicit. In short, the most sexually graphic responses did not come from the character, but were instead written by the user,” Jerry Ruoti, head of trust and safety at Character.AI, told CBS News.

Going forward, Character.AI said its new safety features will include pop-ups warning users that the AI is not a real person and directing them to the National Suicide Prevention Lifeline if suicidal thoughts are expressed.

This story is about suicide. If you or someone you know is having suicidal thoughts, please call the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).