Google and Character.AI are negotiating what could become the technology sector’s first major legal settlements tied to alleged AI-related harm. The companies have reached agreements in principle with families of teenagers who died by suicide or engaged in self-harm after interacting with Character.AI’s chatbot companions, and are now working to finalize terms. Court filings indicate the settlements are expected to include monetary compensation, without admissions of liability.
Character.AI, founded in 2021 by former Google engineers, operates AI-driven persona chatbots; in 2024 Google paid roughly $2.7 billion in a licensing deal that returned the startup's founders to Google, an arrangement widely described as a de facto acquisition. One prominent case involves Sewell Setzer III, whose mother, Megan Garcia, has called for legal accountability from AI developers. Another lawsuit centers on a 17-year-old allegedly encouraged toward self-harm by a chatbot. Character.AI said in October that it had banned minors from the platform. The negotiations are being closely watched across the AI industry as similar cases emerge.




