ChatGPT Chats Face Scrutiny

2 September 2025

OpenAI has acknowledged that ChatGPT conversations indicating potential violence may be subject to human review and, in urgent situations, shared with law enforcement. This disclosure has sparked privacy concerns among users who believed their AI interactions were confidential. The process begins with automated systems flagging potential threats, which are then escalated to a review team. If reviewers determine an immediate risk, OpenAI may alert authorities.

In cases involving self-harm ideation, OpenAI prioritises user privacy and does not alert law enforcement; instead, ChatGPT is trained to direct individuals to professional help resources. However, concerns persist about how OpenAI determines user locations for emergency notifications and about the potential for misuse, such as false reports triggering wrongful police intervention. The disclosure also sits uneasily with CEO Sam Altman's earlier statements that ChatGPT should offer privacy akin to a therapist, raising questions about the platform's confidentiality.

Users should be aware that ChatGPT's privacy is not absolute. Conversations are encrypted in transit, but they are stored on OpenAI's servers and may be reviewed by staff. Users can adjust privacy settings to limit the use of their data for model training, but complete confidentiality cannot be guaranteed.

