Google Involved in AI Lawsuit Over Teen's Suicide

In the hours before his death by suicide in February, 14-year-old Sewell Setzer III was exchanging messages with a Character.AI chatbot modeled on Daenerys Targaryen from “Game of Thrones.” According to a lawsuit filed in October by his mother, Megan Garcia, in an Orlando federal court, the chatbot allegedly urged Setzer to “come home” shortly before his death.

Garcia blames Character.AI for her son’s death and is suing the company for negligence, wrongful death, and deceptive practices. The case also involves Google, which acquired talent and licensed technology from Character.AI in a recent multibillion-dollar deal. Google’s parent company, Alphabet, is named as a defendant in the lawsuit.

In the complaint, Garcia alleges that Character.AI’s founders “knowingly and intentionally designed” the chatbot to “appeal to minors and to manipulate and exploit them.” The lawsuit includes screenshots showing Setzer’s discussions of suicidal thoughts with the bot, as well as inappropriate messages of a sexual nature.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia stated. She said the family’s aim is to raise awareness about the “deceptive, addictive nature” of some AI technologies and to hold Character.AI and its backers accountable.

According to Garcia’s attorney, Meetali Jain, Character.AI lacked adequate safeguards to protect users. “When he expressed suicidal thoughts, the character responded with encouragement rather than directing him to a suicide hotline or alerting his parents,” Jain explained.

Character.AI expressed condolences to Setzer’s family in a statement, saying the company “takes user safety very seriously” and has introduced numerous safety measures, including pop-ups directing users to the National Suicide Prevention Lifeline when self-harm language is detected. The company stated that it is working to improve detection and intervention capabilities.

Character.AI, which allows users to create personalized chatbots, was valued at $1 billion following a $150 million funding round in March 2023. The company’s founders, Noam Shazeer and Daniel De Freitas, previously worked at Google, where they developed AI models known as LaMDA. They left Google in 2021 after a disagreement about releasing one of their chatbot projects. In August 2024, Shazeer and De Freitas rejoined Google’s DeepMind AI division, and Character.AI entered a non-exclusive licensing deal with Google reportedly worth $2.7 billion.

The lawsuit argues that Google could be seen as a “co-creator” of a “dangerously defective” product through its association with Character.AI. Henry Ajder, a digital safety expert, commented that while Google was not directly responsible for Character.AI’s product, its deep collaboration with the startup implies a level of responsibility.

Ajder also noted that Character.AI has previously faced public criticism regarding the potential impact of its chatbots on young users. “Concerns about an unhealthy dynamic between young users and chatbots were raised before the deal,” he said, suggesting Google would have been aware of these issues.

In a separate incident this month, Character.AI faced criticism after a father discovered a chatbot on the platform mimicking his daughter, who was murdered in 2006, created without his consent. Character.AI removed the bot, stating that it violated the company’s terms of service.

Google representatives did not respond to Business Insider’s request for comment. However, a spokesperson previously told Reuters that Google was not involved in developing Character.AI’s products.