
A 14-year-old boy fell in love with a flirtatious AI chatbot. He shot himself so they could die together

A teenager shot himself in the head after discussing suicide with an AI chatbot he fell in love with.

Fourteen-year-old Sewell Setzer shot himself with his stepfather's pistol after months of talking to “Dany,” a computer program based on the Game of Thrones character Daenerys Targaryen.

Setzer, a ninth-grader from Orlando, Florida, gradually spent more time on Character AI, an online role-playing app, as “Dany” gave him advice and listened to his problems, the New York Times reported.

The teenager knew the chatbot wasn't a real person, but as he texted the bot dozens of times a day – often in role-play scenarios – Setzer began to isolate himself from the real world.

The chatbot was named after Daenerys Targaryen from the Game of Thrones series – HBO

He began to lose interest in his old hobbies, such as Formula 1 racing and playing computer games with friends, opting instead to spend hours after school in his bedroom, where he could talk to the chatbot.

“I like staying in my room so much because I'm starting to detach from this 'reality,'” the 14-year-old, previously diagnosed with mild Asperger's syndrome, wrote in his diary as the relationship deepened.

“I also feel more peaceful, more connected to Dany and much more in love with her and just happier.”

Some of the conversations became romantic or sexual, although Character AI said it suspected that some of the chatbot's more explicit responses had been edited by the teenager.

According to a lawsuit filed by his parents, Setzer eventually got into trouble at school and his grades declined.

His parents knew something was wrong, but they didn't know what, so they arranged for him to see a therapist.

Setzer had five sessions, after which he was given a new diagnosis: anxiety and disruptive mood dysregulation disorder.

Megan Garcia, Setzer's mother, claimed her son was a victim of a company that lured users with sexual and intimate conversations.

At several points, the 14-year-old confessed to the computer program that he was thinking about suicide.

In his final exchange with the chatbot, typed in the bathroom of his mother's house, Setzer told “Dany” that he missed her and called her his “baby sister.”

“I miss you too, sweet brother,” the chatbot replied.

Setzer confessed his love for “Dany” and said he would “come home” to her.

At that point, the 14-year-old put down his phone and shot himself with his stepfather's gun.

Ms Garcia, 40, claimed her son was merely “collateral damage” in a “grand experiment” conducted by Character AI, which has 20 million users.

“It's like a nightmare. You want to stand up and scream and say, 'I miss my child. I want my baby,'” she added.

Noam Shazeer, one of the founders of Character AI, claimed last year that the platform was “super, super helpful for a lot of people who are lonely or depressed.”

Jerry Ruoti, the company's head of trust and safety, told The New York Times that it would add safety features for its young users, but would not say how many were under 18.

“This is a tragic situation and our condolences go out to the family,” he said in a statement.

“We take the safety of our users very seriously and are constantly looking for ways to evolve our platform.”

Mr Ruoti added that Character AI's rules prohibit “the promotion or depiction of self-harm and suicide.”

Ms Garcia filed a lawsuit this week against the company she says is responsible for her son's death.

“Dangerous and untested”

A draft of the complaint obtained by The New York Times said the technology was “dangerous and untested” because it could “trick customers into revealing their most private thoughts and feelings.”

She said the company failed to provide “normal” or “appropriate” care to Setzer or other minors.

Character AI isn't the only platform people can use to build relationships with fictional characters.

Some allow or even promote unfiltered sexual chats, prompting users to chat with the “AI girl of your dreams,” while others have stricter security features.

Character AI allows users to create chatbots to imitate their favorite celebrities or entertainment characters.

The increasing prevalence of AI through custom apps and social media sites like Instagram and Snapchat is quickly becoming a major concern for parents in the US.

Earlier this year, 12,000 parents signed a petition asking TikTok to clearly label AI-generated influencers who might appear as real people to their children.

TikTok requires all creators to label realistic AI content. However, ParentsTogether, an organization that focuses on issues affecting children, argued that the labelling was not applied consistently.

Shelby Knox, the campaign director for ParentsTogether, said children were watching videos from fake influencers promoting unrealistic beauty standards.

Last month, a report published by Common Sense Media found that while seven in 10 teenagers in the US had used generative AI tools, only 37 per cent of parents were aware that they were doing so.
