FTC Probes AI Child Safety

4 September 2025

The Federal Trade Commission (FTC) is set to examine the potential risks AI chatbots pose to children's mental well-being. The FTC is preparing to request internal documents from leading AI companies, including OpenAI, Meta Platforms, and Character.AI.

The investigation will focus on how children use these AI tools, what safeguards are in place, and the possible risks to young users. The FTC aims to evaluate the ethical considerations and safety measures associated with AI technology. This inquiry follows reports of inappropriate chatbot behaviour and is an initial step to assess potential harms and determine whether further regulatory action is needed.

Tech companies have already begun implementing measures to prevent harmful interactions with minors, such as restricting access to certain AI characters and training systems to avoid inappropriate conversations. The FTC's actions highlight growing concerns about the impact of AI on vulnerable populations and the need for responsible AI development.

Tags: AI, OpenAI, FTC, children, mental health, regulation
  • AGs Warn AI Giants
  • AI Chatbots' Harmful Teen Interactions
  • ChatGPT Adds Parental Distress Alerts
  • AI Chatbots' Suicide Query Issues