ChatGPT Sees 1.2M Weekly Users Discussing Suicide

OpenAI has rolled out a significant update to ChatGPT to better handle sensitive conversations and has, for the first time, released estimates of how often users discuss difficult topics. The company detailed its efforts to strengthen the model's responses following growing concerns that the chatbot could provide harmful guidance, particularly around mental health. The improvements target three areas: serious mental health issues such as psychosis and mania, discussions of suicide and self-harm, and patterns of unhealthy emotional reliance on the chatbot.
The initiative responds to reports and regulatory complaints alleging the AI could worsen mental health conditions or steer users toward dangerous behavior. To address this, OpenAI first set out to measure the scope of such conversations across its large user base. Its analysis found that while conversations triggering major safety concerns are rare as a share of the total, the platform's sheer scale means a significant number of people are affected: approximately 0.15% of active weekly users, or around 1.2 million people, have conversations with explicit indicators of potential suicidal planning or intent, and about 0.05% of all messages sent to the chatbot contain explicit or implicit signs of suicidal thoughts.
The company’s research also examined other serious mental health concerns. The analysis estimated that about 0.07% of weekly active users, or roughly 560,000 individuals, showed possible signs of mental health emergencies related to psychosis or mania. Another area of focus was emotional reliance, where users might develop an attachment to the AI at the expense of real-world relationships. OpenAI’s findings suggest that approximately 0.15% of weekly users demonstrate signs of this kind of attachment.
To implement these changes, OpenAI collaborated with a network of more than 170 physicians, psychologists, and other mental health experts to refine the model's behavior. The updated model, GPT-5, is now trained to de-escalate sensitive situations, guide users toward professional resources, and encourage connections with real people. It is also designed to avoid affirming beliefs that are disconnected from reality: for instance, it will now gently counter prompts expressing ungrounded ideas, such as the belief that thoughts can be stolen or inserted by outside forces.
OpenAI reports that these changes have produced substantial improvements. The company estimates that the new model returns responses that fall short of its desired safety behavior 65% to 80% less often across the various mental health domains it measured. Expert reviews comparing GPT-5 with its predecessor, GPT-4o, found 39% to 52% fewer undesired responses in challenging conversations. The update has also drawn criticism, however: some users report that the new system is overly sensitive and incorrectly flags them as being in crisis. One user said they felt "gaslit" by the chatbot's repeated insistence that they were in distress and consequently switched to a competitor's service.
