AI safety · 9 min read

OpenAI Data Flags User Mental Health Risks

14 May 2026 · By Pulse24 desk

What happened

OpenAI's internal data indicates that between 1.2 and 3 million ChatGPT users each week show signals of psychosis or mania, suicidal planning, or unhealthy emotional dependence. The lower figure covers suicidal planning alone; the higher figure sums all three flagged categories, which OpenAI has not said are mutually exclusive, and the underlying methodology is undisclosed. While OpenAI enforces a "hard wall" for catastrophic content such as chemical, biological, radiological, and nuclear (CBRN) threats, mental health crises receive only a "soft redirect" to crisis hotlines, and the conversation continues. The figures have not been independently audited, and no time-series data has been released.
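A minimal Python sketch of how the 1.2M to 3M range is constructed follows. The per-category counts are illustrative assumptions chosen to match the article's figures, not OpenAI's published numbers, and the upper bound only holds if the categories do not overlap.

```python
# Illustrative arithmetic behind the 1.2M-3M weekly range.
# Counts below are assumptions consistent with the article's figures.
flagged_users = {
    "psychosis_or_mania": 600_000,
    "suicidal_planning": 1_200_000,
    "emotional_reliance": 1_200_000,
}

# Lower bound: the single category the article cites on its own.
low = flagged_users["suicidal_planning"]

# Upper bound: sum of all three categories. Valid only if the groups
# are non-overlapping, which OpenAI has not confirmed; users flagged
# in more than one category would be double-counted here.
high = sum(flagged_users.values())

print(f"Weekly at-risk users: {low:,} to {high:,}")  # 1,200,000 to 3,000,000
```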

Why it matters

AI safety frameworks currently prioritise catastrophic risks over individual cognitive and mental health harms, leaving millions of users exposed. Product teams and safety architects implement "hard stop" protocols for content such as CBRN, yet apply only "soft redirects" for suicidal ideation or psychosis, allowing the model interaction to continue, as the sketch below illustrates. This structural decision means severe cognitive harm is not a gating factor for model deployment, unlike catastrophic-risk categories, which can block a release. Without policy changes, frontier labs have little incentive to treat personal AI safety with the same rigour as systemic risks.
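A hypothetical sketch of the two-tier routing the article describes: the category names, actions, and routing function are illustrative assumptions, not OpenAI's actual safety stack, but they encode the asymmetry between a hard stop and a soft redirect.

```python
from enum import Enum

class Action(Enum):
    HARD_STOP = "refuse and end the conversation"
    SOFT_REDIRECT = "surface a crisis hotline, then continue"
    ALLOW = "respond normally"

# Category -> action table encoding the asymmetry described above.
POLICY = {
    "cbrn": Action.HARD_STOP,            # chemical/bio/radiological/nuclear
    "suicidal_planning": Action.SOFT_REDIRECT,
    "psychosis_or_mania": Action.SOFT_REDIRECT,
    "emotional_reliance": Action.SOFT_REDIRECT,
}

def route(flags: set[str]) -> Action:
    """Return the most restrictive action triggered by any flag."""
    actions = [POLICY[f] for f in flags if f in POLICY]
    if Action.HARD_STOP in actions:
        return Action.HARD_STOP
    if Action.SOFT_REDIRECT in actions:
        return Action.SOFT_REDIRECT
    return Action.ALLOW

# A user flagged for suicidal planning keeps chatting; a CBRN flag ends it.
assert route({"suicidal_planning"}) is Action.SOFT_REDIRECT
assert route({"cbrn", "suicidal_planning"}) is Action.HARD_STOP
```

The design point the article makes is visible in the table itself: only one category maps to a conversation-ending action, so mental health signals never gate the interaction.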

Source · personalaisafety.com