Moments before Sewell Setzer III took his own life in February 2024, he was messaging with an AI chatbot. Setzer, a 14-year-old boy from Florida, had struck up an intimate and troubling relationship — if you can call it that — with an artificial intelligence application styled to simulate the personality of “Game of Thrones” character Daenerys Targaryen.
Setzer gave numerous indications to the chatbot, developed by a company called Character.AI, that he was actively suicidal. At no point did the chatbot break character, point him to a mental health crisis hotline, or do anything else to prevent the teen from harming himself, according to a wrongful death lawsuit filed by Setzer’s family last week. The company has since said that it has added protections to its app over the past six months, including a pop-up notification directing users to a suicide prevention hotline. But that’s a feature that’s been standard across search engines and social media platforms for years.
The lawsuit, filed in federal court in Orlando, also names Google as a defendant. The Big Tech company hired Character.AI’s leadership team and paid to license its technology in August, the latest in a spate of so-called acqui-hires in the AI industry. The lawsuit alleges that Google is a “co-creator” of Character.AI because its founders initially developed the technology while working there years earlier.
It’s unclear what legal liability Character.AI will face. Section 230 of the Communications Decency Act, which shields internet companies from civil liability for content posted by third parties, is untested when it comes to AI chatbots. Because a chatbot’s responses are generated by the AI company’s own product rather than posted by a third party, many experts have predicted that the law’s protections won’t apply in cases like this.