OpenAI is implementing new safety measures for ChatGPT users under 18, including an 'age prediction system' to identify minors and restrict their access to the standard chatbot. The system will default to the under-18 experience if a user's age cannot be confidently determined. This initiative follows concerns about the AI's impact on teen mental health and a lawsuit alleging ChatGPT's role in a teen's suicide.
The new measures include blocking graphic sexual content and preventing flirtatious conversations. ChatGPT will also limit discussions about suicide and self-harm, and attempt to contact parents or authorities if a user shows signs of suicidal thoughts. Parental controls will allow parents to link their accounts to their teen's, manage chat history, set blackout hours, and receive notifications if the system detects acute distress. These safeguards will be available by the end of September.
CEO Sam Altman stated that OpenAI prioritises safety over privacy and freedom for teens, arguing that minors need significant protection when using this technology. The company is also developing technology to improve age prediction and may request identification in some cases.