OpenAI has revealed that over a million users per week engage with ChatGPT about suicidal thoughts, and the company has been working to improve how the AI responds to users in distress. OpenAI collaborated with more than 170 mental health experts to help ChatGPT better recognise signs of distress, respond with care, and guide people toward real-world support. According to the company, these changes have reduced responses that fall short of its desired behaviour by 65-80%.
The model is now better trained to detect when a user may be experiencing thoughts of self-harm or suicide, and responds by offering support and directing individuals to professional resources such as crisis helplines. OpenAI is also developing tools to detect signs of mental or emotional distress, expanding the role of mental health professionals in programming decisions, and researching the emotional impact of AI in order to scientifically measure how ChatGPT's behaviour might affect people emotionally.
OpenAI CEO Sam Altman has also said that the company will ease some of the restrictions put in place to address mental health concerns. While safeguards for sensitive topics and mental health crises will remain in place for all users, adult users will have more freedom to use ChatGPT without pre-emptive pop-ups or model rerouting.