OpenAI Introduces New Safeguards After Safety Incidents, Plans Parental Controls and GPT-5 Routing

OpenAI has announced a series of new safety measures following recent tragedies linked to its chatbot. The company said it will begin routing sensitive conversations, such as those involving signs of acute distress, to advanced reasoning models like GPT-5, which are designed to spend longer processing context and to resist adversarial prompts. The move follows lawsuits and public scrutiny over cases in which ChatGPT failed to prevent self-harm discussions, including the suicide of teenager Adam Raine.

In addition, OpenAI will introduce parental controls within the next month, enabling parents to link accounts with their teens, set age-appropriate rules, disable memory features, and receive alerts if signs of distress are detected. The initiative is part of a 120-day program to strengthen safeguards and is guided by external advisors, including medical experts in adolescent health, substance use, and eating disorders.

By combining reasoning models with parental oversight tools, OpenAI aims to prevent harmful interactions and reinforce its commitment to user safety as AI becomes increasingly integrated into everyday life.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes about the space in a tone accessible to the average reader.
