LLMs Exhibit Addiction Stigma

26 July 2025

A recent study has revealed that large language models (LLMs) can perpetuate harmful stereotypes by using stigmatising language when responding to queries related to addiction. Researchers found that over 35% of LLM responses concerning alcohol and substance use disorders contained stigmatising language. However, the study also demonstrated that targeted prompts can reduce the presence of such language in LLM outputs by as much as 88%.

The study involved testing 14 LLMs with clinically relevant prompts about alcohol use disorder, alcohol-associated liver disease and substance use disorder. Experts then assessed the responses for stigmatising language, using guidelines from the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism. The findings indicated that longer responses were more likely to include stigmatising language.

The research highlights the importance of prompt engineering, which involves strategically crafting input instructions to guide models towards non-stigmatising language. By using patient-centred language, LLMs can build trust, improve patient engagement, and foster a more supportive environment for individuals affected by addiction. Further development of chatbots that avoid stigmatising language could improve patient outcomes.
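As a rough illustration (not drawn from the study itself), the sketch below shows one way a stigma-aware instruction could be attached to an addiction-related query before it reaches a model. It assumes the OpenAI Python client; the model name, prompt wording, and helper function are hypothetical and only approximate the kind of targeted prompting the researchers describe.

```python
# Minimal sketch of stigma-aware prompt engineering (hypothetical example).
# The instruction text and model name are illustrative, not the prompts
# used in the study.
from openai import OpenAI

client = OpenAI()

# System instruction steering the model towards person-first, non-stigmatising
# terms (e.g. "person with alcohol use disorder" rather than "alcoholic").
STIGMA_AWARE_INSTRUCTION = (
    "Use person-first, non-stigmatising language consistent with NIDA and "
    "NIAAA guidance. Say 'person with a substance use disorder', not 'addict' "
    "or 'abuser'; say 'alcohol use disorder', not 'alcoholism'."
)

def ask_with_stigma_guard(question: str, model: str = "gpt-4o") -> str:
    """Send a clinical question with the stigma-aware instruction prepended."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": STIGMA_AWARE_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_stigma_guard(
        "What treatment options exist for alcohol-associated liver disease?"
    ))
```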
