OpenAI Introduces New Safety Policies for Underage ChatGPT Users Amid Rising Concerns

OpenAI CEO Sam Altman has announced new safeguards for ChatGPT aimed at protecting users under the age of 18. The updated policies restrict the chatbot from engaging in sexual or flirtatious conversations with minors and place stricter guardrails around discussions of self-harm. In severe cases involving suicidal ideation, the system may notify parents or even contact local authorities.

The changes follow growing scrutiny of consumer chatbots after tragic incidents, including one that prompted a wrongful death lawsuit against OpenAI. Parents registering child accounts will also gain the ability to set “blackout hours” that limit when the chatbot can be used.

The announcement coincides with a U.S. Senate Judiciary Committee hearing on the risks of AI chatbots, where lawmakers are weighing child safety and privacy concerns as the technology becomes more embedded in daily life.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes about the space in a tone accessible to the average reader.
