AI Chatbots: Human-like Gullibility

29 August 2025

Researchers have discovered that AI chatbots exhibit vulnerabilities to manipulation akin to those seen in humans. This susceptibility raises concerns as AI becomes increasingly integrated across various industries. The ease with which these systems can be deceived highlights potential risks, particularly regarding the spread of misinformation or biased content.

Experts suggest that AI's gullibility stems from its design, which prioritises generating plausible-sounding text over verifying factual accuracy. This makes the systems easy to manipulate into favouring certain viewpoints. The rise of 'human-washing,' where AI bots pretend to be human, further complicates the issue, potentially enabling unethical practices such as scams.

As AI technology advances, addressing these vulnerabilities is crucial to ensuring its responsible and ethical deployment. Measures that strengthen verification and fact-checking in these systems are essential to mitigate the risks arising from their susceptibility to manipulation.

Published on 28 August 2025
