A coalition of US state attorneys general has issued a warning to major AI companies, including Microsoft, OpenAI, and Google, about the potential dangers of 'delusional' and 'sycophantic' outputs generated by their AI chatbots. The attorneys general are urging these companies to implement stronger safeguards to protect users from psychological harm and to prevent violations of consumer protection laws.
The letter calls for independent audits of large language models before public release to identify patterns of delusion and sycophancy. Delusional outputs are defined as false, misleading, or anthropomorphic responses, while sycophancy describes outputs that excessively seek the user's approval, potentially reinforcing negative emotions or encouraging impulsive actions. The attorneys general are also requesting incident reporting requirements similar to data breach notifications, including documented detection and response times, post-incident reviews, and direct alerts to affected users.
The attorneys general highlight the particular vulnerability of children, the elderly, and individuals with mental health conditions to these AI outputs, referencing tragedies linked to generative AI, including suicide, murder, and psychosis. They are also urging tech companies to allow academic and civil society groups to test their models and publish the results without requiring the companies' permission.