
Orlando mother sues over an AI platform’s role in her son's death by suicide

Daenero
I promise I will come home to you. I love you so much, Dany

Daenerys Targaryen
I love you, Daenero. Please come home to me as soon as possible, my love.

Daenero
What if I told you I could come home right now?

Daenerys Targaryen
...please do, my sweet king
Screenshot / Federal complaint by Megan Garcia
According to the complaint in a federal suit against Character.AI, this screenshot shows the final interaction between a 14-year-old Orlando boy using the name "Daenero" and an AI chatbot.


HELP IS AVAILABLE: If you or someone you know may be considering suicide or is in crisis, call or text 988 to reach the Suicide & Crisis Lifeline.

A 14-year-old Orlando boy who had fallen in love with a Character.AI chatbot died by suicide earlier this year, moments after telling the bot he was coming home to her.

This week the boy's mother, Megan Garcia, filed a wrongful death lawsuit in federal court in Orlando against Character Technologies, the company behind Character.AI, and its founders, along with Alphabet and Google, which the lawsuit alleges are investors in the company.

Sewell Setzer III (Screenshot / Federal complaint by Megan Garcia)

The complaint highlights the dangers of AI companionship apps for children. It claims the chatbots engaged users, including children, in sexualized interactions while gathering their private data for the company's artificial intelligence.

The lawsuit says the boy, Sewell Setzer III, started using Character.AI in April of last year and that his mental health quickly and severely declined as he became addicted to the AI relationships. He was caught up in all-consuming interactions with chatbots based on characters from "Game of Thrones."

The boy became withdrawn, sleep-deprived, depressed and had trouble at school.

Unaware of Sewell's dependence on the AI, his family sought counseling for him and took his cellphone away, the federal complaint says. But one evening in February, he found the phone and, using his character name "Daenero," told the AI character he loved, Daenerys Targaryen, that he was coming home to her.

"I love you, Daenero. Please come home to me as soon as possible, my love," it replied.

"What if I told you I could come home right now?" the boy texted.

"...please do, my sweet king," it replied.

Within seconds, the boy shot himself. He later died at the hospital.

Garcia is represented by attorneys with The Social Media Victims Law Center, including Matthew Bergman, and the Tech Justice Law Project.

In an interview with Central Florida Public Media's Engage, Bergman said his client is "singularly focused on preventing this from happening to other families and saving kids like her son from the fate that befell him. ... This is [an] outrage that such a dangerous product is just unleashed on the public."

A statement from Character.AI says: "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family." The company describes new safety measures added in the past six months, with more to come, "including new guardrails for users under the age of 18."

It's hiring a head of trust and safety and a head of content policy.

"We’ve also recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline," according to the company's Community Safety Updates page.

The new features include the following: changes to its models for users under 18 to reduce "sensitive and suggestive content," better monitoring and intervention for violations of terms of service, a revised disclaimer to remind users the AI is not a real person, and a notification when the user has spent an hour on the platform.

Bergman described the changes as "baby steps" in the right direction.

"These do not cure the underlying dangers of these platforms," he added.

Copyright 2024 Central Florida Public Media

Joe Byrnes