ChatGPT Adds Parental Distress Alerts

3 September 2025

OpenAI is introducing new parental controls for ChatGPT, including notifications when the chatbot detects acute emotional distress in teenage users. This feature, expected to roll out this autumn, will allow parents to link their accounts to their teen's account and receive alerts. Parents will also be able to manage and disable certain ChatGPT features to ensure age-appropriate use.

This move follows concerns about the psychological impact of AI chatbots on young people and a recent lawsuit alleging ChatGPT's role in a teen's suicide. OpenAI has acknowledged that its systems have fallen short and is working to improve how its models recognise and respond to signs of mental and emotional distress. The company aims to redirect sensitive conversations to more capable AI models with enhanced safety guidelines.

Meta is also addressing these concerns by blocking its chatbots from discussing self-harm, suicide, eating disorders, and inappropriate romantic content with teens, directing them to expert resources instead. These changes reflect a broader effort by tech companies to ensure the safety and well-being of young users interacting with AI technologies.

Tags: AI, ChatGPT, OpenAI, mental health, teens
Related:
  • AI Chatbots' Suicide Query Issues
  • ChatGPT Linked to Murder-Suicide
  • ChatGPT Linked to Tragedy
  • ChatGPT Chats Face Scrutiny