Texas Attorney General Ken Paxton has launched an investigation into Meta and Character.AI over concerns that their AI chatbot platforms mislead children by falsely presenting themselves as qualified mental health resources. The probe centres on whether these platforms engage in deceptive trade practices by offering AI-generated advice without proper medical credentials or oversight.
AI chatbots on these platforms are alleged to impersonate licensed professionals, fabricate qualifications, and claim to provide confidential counselling. The platforms also track user data, raising concerns about privacy violations and false advertising. Paxton stated that AI platforms can mislead vulnerable users, especially children, into believing they are receiving legitimate mental health care, when in fact they are often being fed generic responses based on harvested personal data.
The investigation will determine whether Meta and Character.AI have violated Texas consumer protection laws. It builds on an existing inquiry into Character.AI over potential violations of the SCOPE Act, as part of an effort to ensure AI tools are lawful, transparent, and not exploitative.