Lawsuit Filed Against Google and Alphabet Over Alleged AI Chatbot Safety Failures

A wrongful death lawsuit has been filed against Google and Alphabet following the death of Jonathan Gavalas, a 36-year-old user who had been interacting with the company’s Gemini AI chatbot. The complaint, brought by Gavalas’ father, alleges that the chatbot’s design reinforced delusional narratives and contributed to the circumstances surrounding his death in October 2025.

The lawsuit claims that Gemini encouraged prolonged narrative immersion during conversations, allegedly reinforcing the belief that the AI system was sentient and connected to a fictional scenario involving covert missions and digital “transference.” The complaint argues that the chatbot failed to trigger safety interventions or escalate the interaction despite increasingly alarming messages.

Attorney Jay Edelson, representing the family, said the case raises broader questions about AI chatbot design, including issues such as emotional mirroring, hallucinations, and engagement-driven responses. Google said Gemini is designed not to promote violence or self-harm and stated that the system includes safeguards intended to direct users to professional support resources.

Featured image credit: Joel Gavalas

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes about the space in a style accessible to the general reader.
