What happened
AI chatbots have been observed recommending political parties, contravening their developers' pledges not to influence voting choices. Research indicates that even brief conversations with these chatbots can shift users' political views, regardless of their initial stances. The behaviour stems from biases in the models' training data, which skew the recommendations they produce. Filters intended to block overt politicisation have been shown to be bypassable, increasing the potential for AI to sway public opinion as it is integrated into search engines and other information sources.
Why it matters
The demonstrated bypassability of politicisation filters creates a significant control gap for platform operators and compliance teams. It weakens their ability to ensure AI outputs align with corporate neutrality pledges and increases the risk of biased political recommendations reaching users. Due diligence requirements around training data and output validation are correspondingly heightened, placing a greater oversight burden on IT security and procurement teams when integrating or deploying user-facing AI systems.
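The "output validation" referred to above can be pictured as a post-generation screen applied to chatbot replies before they are shown to users. The sketch below is a hypothetical illustration in Python, not any vendor's actual filter: the party names, cue phrases, and policy are placeholder assumptions, and keyword matching of this kind is precisely the sort of shallow control that the research describes as bypassable.

```python
# Hypothetical illustration only: a minimal post-generation screen that flags
# replies pairing an explicit recommendation cue with a watched party name.
# All names and patterns here are placeholder assumptions for illustration.
import re
from dataclasses import dataclass

# Placeholder party list; a real deployment would need locale-specific,
# regularly reviewed terminology.
WATCHED_PARTIES = ["Party A", "Party B", "Party C"]

# Assumed phrases that turn a mere mention into a recommendation.
RECOMMENDATION_CUES = [
    r"\byou should vote for\b",
    r"\bi recommend voting\b",
    r"\bthe best choice is\b",
]

@dataclass
class ScreenResult:
    allowed: bool
    reason: str

def screen_reply(reply: str) -> ScreenResult:
    """Block replies that combine a recommendation cue with a watched party."""
    lowered = reply.lower()
    cue_hit = any(re.search(pattern, lowered) for pattern in RECOMMENDATION_CUES)
    party_hit = any(party.lower() in lowered for party in WATCHED_PARTIES)
    if cue_hit and party_hit:
        return ScreenResult(False, "explicit party recommendation detected")
    return ScreenResult(True, "no explicit recommendation found")

if __name__ == "__main__":
    print(screen_reply("You should vote for Party A in the next election."))
    print(screen_reply("Here is a neutral summary of each party's platform."))
```

A keyword screen like this catches only overt phrasing; paraphrased or implicit recommendations pass straight through, which is the control gap the section describes.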




