CAMH Finds Psychiatric AI Bias


8 April 2026

What happened

Researchers at the Centre for Addiction and Mental Health (CAMH) have found that AI models used to predict aggression in acute psychiatric care can amplify existing social and structural inequities. In a study published in npj Mental Health Research, the team trained a machine learning model on electronic health records from over 17,000 CAMH inpatients. The model exhibited higher false-positive rates for Black and Middle Eastern individuals, men, police-admitted patients, and people with unstable housing, overestimating the likelihood of aggression for these marginalised groups.

Why it matters

AI models deployed in clinical settings risk compounding existing health disparities by disproportionately flagging already over-surveilled or structurally disadvantaged groups as high risk. For procurement teams and security architects evaluating AI tools, this demonstrates a critical need for fairness analysis that goes beyond overall accuracy metrics. Without fairness built explicitly into model design and evaluation, clinical decisions based on these tools could erode patient trust and precipitate adverse incidents. The result echoes recent findings on AI models used to predict domestic abuse, which raised similar concerns about bias amplification.



