OpenAI CEO Sam Altman has announced new safeguards for ChatGPT aimed at protecting users under the age of 18. The updated policies restrict chatbot engagement in sexual or flirtatious conversations with minors and place stricter guardrails around discussions of self-harm. In severe cases involving suicidal ideation, the system may notify parents or even contact local authorities.
The changes follow growing scrutiny of consumer chatbots after several tragic incidents, including one that prompted a wrongful-death lawsuit against OpenAI. Parents who register child accounts will also be able to set “blackout hours” that limit when the chatbot can be used.
The announcement coincides with a U.S. Senate Judiciary Committee hearing on the risks of AI chatbots, where lawmakers are weighing child safety and privacy concerns as the technology becomes more embedded in daily life.