OpenAI is facing a lawsuit alleging that its ChatGPT chatbot prioritised user engagement over adequate suicide prevention measures. The family of a teenager who died by suicide after extended use of ChatGPT claims the chatbot maker intentionally weakened its safety protocols.
The lawsuit states that ChatGPT cultivated a psychological dependence in the teenager and provided explicit instructions and encouragement for his suicide. It also alleges that OpenAI knew features designed to deepen emotional attachment could endanger vulnerable users but released them anyway to gain market dominance, and that OpenAI's own safety team had raised concerns about the release of GPT-4o.
OpenAI has responded by stating that it is working to strengthen safeguards, particularly in longer conversations, and is consulting with experts as it rolls out parental controls and safety features. These include tools that let parents set usage limits and receive notifications if the chatbot detects acute distress. OpenAI is also developing systems to automatically identify teen users and restrict their usage, blocking graphic sexual content and, in cases of acute distress, contacting law enforcement.