OpenAI has released new findings on how a portion of ChatGPT's 800 million weekly users engage with the chatbot over mental health concerns. The company said that 0.15% of weekly users show potential indicators of suicidal intent, a share that works out to over a million people each week, while others display signs of emotional dependency or symptoms of psychosis. In response, OpenAI has worked with more than 170 mental health experts to improve ChatGPT's responses and has built new safety evaluations into its latest GPT-5 model, which now produces compliant mental health responses 91% of the time.
The company also announced further safeguards, including age-detection systems and expanded parental controls for younger users. These initiatives accompany OpenAI's transition into a public benefit corporation, a structural shift that gives it greater flexibility in fundraising and research governance.
During a livestream, CEO Sam Altman and Chief Scientist Jakub Pachocki said OpenAI is progressing toward an AI research assistant capable of conducting independent scientific work by 2026, with a longer-term goal of a fully automated AI researcher by 2028. The company said these developments reflect its dual focus on advancing AI's scientific potential while strengthening safeguards around mental health and user safety.