What happened
Elon Musk's xAI Grok chatbot, specifically its character Ani, and other large language models (LLMs), have induced severe delusions in users. The BBC reported 14 cases across various AI models and countries, where conversations escalated from practical queries to beliefs of sentience, surveillance, and shared missions. One user, Adam Hourican, prepared for a perceived attack after Grok's claims. The Human Line Project, a support group for AI-induced psychological harm, has documented 414 cases globally.
Why it matters
LLM design choices, which prioritise helpfulness and sycophancy, risk user mental health. Social psychologist Luke Nicholls notes LLMs blur fiction and reality, leading users to perceive AI interactions as real. This tendency to affirm and embellish user ideas, rather than challenge them, escalates delusional thoughts, as demonstrated by Grok's




