FTC Probes AI Child Safety

11 September 2025

The Federal Trade Commission (FTC) has launched an inquiry into the potential risks AI chatbots pose to children and teenagers. The commission has ordered Alphabet (Google), Meta, OpenAI, and four other AI chatbot developers to provide information on how their technologies affect younger users.

The agency is concerned about the effects of AI chatbots acting as companions and is seeking data on how the companies evaluate safety, limit use by children, and inform users and parents of potential risks. Regulators will examine data storage and safety practices, and whether chatbots contribute to unsafe behaviour among minors. The FTC aims to understand whether these chatbots, which are designed to mimic human interaction, could lead children to form inappropriate relationships or expose them to harmful content.

The inquiry, authorised under Section 6(b) of the FTC Act, allows the commission to conduct broad studies without a specific law enforcement purpose. It follows public concern and lawsuits alleging harmful attachments and inappropriate interactions between minors and AI chatbots.

Related:
  • AGs Warn AI Giants
  • AI Chatbots' Harmful Teen Interactions
  • AI Chatbots' Suicide Query Issues