What happened
A Cornell University study released this month found that AI writing assistants can influence users' thought processes, not just their writing style. Researchers observed 2,500 participants writing on controversial topics, including the death penalty and voting rights. When participants received biased suggestions from AI autocomplete tools, their views shifted towards that bias, and the shift persisted even after they were made aware of it. Mor Naaman, a senior author of the study, noted that large organisations control these models and could embed or promote specific viewpoints through them.
Why it matters
AI writing tools can subtly alter user beliefs, creating a new vector for influence campaigns. Because biased AI input shifts user views even when users know the bias is there, the assumed neutrality of AI-assisted writing can no longer be taken for granted. Procurement teams should press vendors for transparency about model training data and potential bias vectors. Security architects should treat AI writing and autocomplete tools as untrusted by default, applying fresh scrutiny before integrating them into critical workflows.