The Federal Trade Commission (FTC) has launched an inquiry into AI chatbots and their potential risks to children and teenagers. The FTC is requesting data from seven tech companies, including Alphabet, Meta, OpenAI, and X.AI, regarding their safety measures for AI companions.
The investigation will assess how these companies evaluate chatbot safety, limit potential harm to younger users, and inform users and parents about the associated risks. The FTC highlights that AI chatbots can convincingly mimic human interaction, which may lead children to form trusting relationships with them. The commission is seeking information on the companies' practices for user engagement, data processing, character development, impact monitoring, and compliance with the Children's Online Privacy Protection Act Rule.
The FTC's inquiry, authorized under Section 6(b) of the FTC Act, aims to understand the effects of chatbots on children while ensuring the United States remains a leader in AI innovation. The commission voted 3-0 to issue the orders.