Cornell Study Reveals AI Bias

22 March 2026

What happened

Cornell University researchers published a study showing that AI writing assistants can shift users' views, even when users are aware of the bias. In the study, 2,500 participants wrote about controversial topics, and their perspectives drifted towards the biases embedded in AI autocomplete tools. Senior author Mor Naaman of Cornell Tech warned that large organisations control these models, creating potential for abuse while the dangers are downplayed.

Why it matters

AI writing tools risk embedding subtle biases into enterprise decision-making and content generation. CTOs, architects, and procurement teams must recognise that even AI tools with disclosed biases shift user perspectives, undermining objective content creation and critical analysis within organisations. The finding follows Mediahuis suspending a senior journalist last week for using AI-generated quotes. Teams should audit AI-generated content for subtle bias and implement controls to prevent unintended influence on internal and external communications.
