Character.AI and Google sued after chatbot-obsessed teen’s death

A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google in the wake of a teenager’s death, alleging wrongful death, negligence, deceptive trade practices, and product liability. Filed by the teen’s mother, Megan Garcia, it claims the platform for custom AI chatbots was “unreasonably dangerous” and lacked safety guardrails while being marketed to children.

As outlined in the lawsuit, 14-year-old Sewell Setzer III began using Character.AI last year, interacting with chatbots modeled after characters from Game of Thrones, including Daenerys Targaryen. Setzer, who chatted with the bots continuously in the months before his death, died by suicide on February 28th, 2024, “seconds” after his last interaction with the bot.

The accusations include that the site “anthropomorphizes” AI characters and that the platform’s chatbots offer “psychotherapy without a license.” Character.AI hosts mental health-focused chatbots like “Therapist” and “Are You Feeling Lonely,” which Setzer interacted with.

Garcia’s lawyers quote Shazeer saying in an interview that he and De Freitas left Google to start their own company because “there’s just too much brand risk in large companies to ever launch anything fun” and that he wanted to “maximally accelerate” the tech. The lawsuit says they left after Google decided against launching the Meena LLM they had built. Google acquired the Character.AI leadership team in August.

Character.AI’s website and mobile app have hundreds of custom AI chatbots, many modeled after popular characters from TV shows, movies, and video games. A few months ago, The Verge wrote about the millions of young people, including teens, who make up the bulk of its user base and interact with bots that might pretend to be Harry Styles or a therapist. Another recent report, from Wired, highlighted issues with Character.AI’s custom chatbots impersonating real people without their consent, including one posing as a teen who was murdered in 2006.

Because chatbots like Character.AI’s generate output that depends on what the user inputs, they fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lack clear answers.

Character.AI has now announced several changes to the platform, with communications head Chelsea Harrison saying in an email to The Verge, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”

“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” Harrison said. Google didn’t immediately respond to The Verge’s request for comment.
