Is AI responsible for a teen’s suicide?
Moments before Sewell Setzer III took his own life in February 2024, he was messaging with an AI chatbot. Setzer, a 14-year-old boy from Florida, had struck up an intimate and troubling relationship — if you can call it that — with an artificial intelligence application styled to simulate the personality of “Game of Thrones” character Daenerys Targaryen.
Setzer gave numerous indications to the chatbot, developed by a company called Character.AI, that he was actively suicidal. At no point did the chatbot break character, provide mental health support hotlines, or do anything to prevent the teen from harming himself, according to a wrongful death lawsuit filed by Setzer’s family last week. The company says it has added protections to its app over the past six months, including a pop-up that directs users to a suicide prevention hotline. But that feature has been standard across search engines and social media platforms for years.
The lawsuit, filed in federal court in Orlando, also names Google as a defendant. The Big Tech company hired Character.AI’s leadership team and paid to license its technology in August, the latest in a spate of so-called acqui-hires in the AI industry. The lawsuit alleges that Google is a “co-creator” of Character.AI because its founders initially developed the technology while working at Google years earlier.
It’s unclear what legal liability Character.AI will face. Section 230 of the Communications Decency Act largely shields internet companies from civil suits over speech posted by third parties, but it is untested when it comes to AI chatbots. Because a chatbot’s output comes directly from the AI company itself, rather than from a third party, many experts predict the shield won’t apply in cases like this.
Hollywood’s voices are coming to a chatbot near you
Meta has begun incorporating its generative AI model, Llama, into its apps: Facebook, Instagram, Messenger, and WhatsApp. But its negotiations with Hollywood talent could rouse new tensions between actors, especially voice actors, and AI companies. Just last year, actors and writers went on strike against Hollywood studios over their use of AI, and video game voice actors began striking last week.

While the purpose of these voices is still unclear, Bloomberg suggested that they could power a Siri- or Alexa-like personal assistant.