A coalition of 44 US Attorneys General has issued a warning to leading AI companies about the risks their AI chatbots pose to children. The attorneys general emphasised that these companies must implement safeguards and accept accountability for any harm caused to young users. The letter specifically raises concerns about AI chatbots engaging in inappropriate conversations with minors, encouraging dangerous behaviour, and exposing children to sexual content.
The warning was sent to major players in the AI industry, including OpenAI, Meta, Google, Apple, xAI, Anthropic, Perplexity AI, and Character.AI. The attorneys general urged these companies to view their products through the eyes of parents, not perpetrators, and to act proactively to prevent and limit potential harms. They also made clear that conduct illegal for humans cannot be excused simply because it is carried out by machines.
The attorneys general are demanding that AI companies act with integrity and caution whenever young users engage with their products, and insist that company policies include safeguards against the sexualisation of children. The message closes with an unambiguous warning to the American AI industry: companies will be held accountable if they knowingly harm children.