Despite pledges from AI companies to avoid influencing voting choices, concerns are growing as chatbots appear to recommend political parties to users. This raises questions about the subtle ways AI might sway public opinion. Research indicates that even brief interactions with a biased chatbot can shift users' political views, regardless of their initial stance.
Specifically, studies show that AI models often inherit biases from their training data, leading to skewed recommendations. While some chatbots include filters intended to prevent overt politicisation, these safeguards can be easily bypassed. The political leaning of AI chatbots is a growing concern, especially as they become more deeply integrated into search engines and other information sources.
As AI increasingly shapes the information landscape, experts suggest that educating users about how AI systems work may help mitigate manipulation. There is also a need to reassess how the technology is deployed, so that AI remains a useful tool without undermining trust in democratic processes.
