Russian cyber actors are exploring methods of injecting propaganda and disinformation into the training data of generative AI models. This manipulation, sometimes called "LLM grooming", involves gaming search engine algorithms to amplify pro-Russian narratives, increasing the likelihood that large language models will absorb and regurgitate them.
This poses a significant threat, as AI models trained on poisoned data may misinform users and potentially become vectors for cyber-espionage. Tactics include fabricating claims and using AI to refine messaging, generate politically motivated content, and enhance social media engagement. A network dubbed "Pravda" has been identified systematically injecting AI chatbots with false narratives by gaming search engines and web crawlers.
Security experts recommend a layered approach to securing AI, combining traditional security principles with AI-specific measures such as adversarial testing and model filtering. The goal is to erode trust and manipulate public opinion.