Russians Target AI Training Data
17 July 2025

Russian cyber actors are exploring methods of injecting propaganda and disinformation into the training data of generative AI models. This manipulation, sometimes called "LLM grooming", involves gaming search engine algorithms to amplify pro-Russian narratives, increasing the likelihood that large language models will absorb and regurgitate them.

This poses a significant threat, as AI models trained on poisoned data may misinform users and potentially become vectors for cyber-espionage. Tactics include fabricating claims and using AI to refine messaging, generate politically motivated content, and enhance social media engagement. A network dubbed "Pravda" has been identified systematically injecting AI chatbots with false narratives by gaming search engines and web crawlers.

The campaign's broader goal is to erode public trust and manipulate opinion. To counter it, security experts recommend a layered approach to securing AI, combining traditional security principles with AI-specific measures such as adversarial testing and model filtering.
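One such AI-specific measure is filtering training data by provenance before it reaches the model. The sketch below is purely illustrative, assuming a hypothetical corpus format and example blocklist domains; it is not drawn from any real defence deployed against this campaign.

```python
# Illustrative sketch: provenance-based filtering of a training corpus.
# The corpus structure and blocklist domains are hypothetical examples.
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to push coordinated disinformation.
BLOCKLIST = {"propaganda-network.example", "laundered-news.example"}

def is_trusted(doc: dict) -> bool:
    """Keep a document only if its source domain is not on the blocklist."""
    domain = urlparse(doc["url"]).netloc.lower()
    return domain not in BLOCKLIST

corpus = [
    {"url": "https://ft.com/content/some-article", "text": "..."},
    {"url": "https://propaganda-network.example/story", "text": "..."},
]

# Drop the poisoned document before training.
filtered = [doc for doc in corpus if is_trusted(doc)]
print(len(filtered))  # 1
```

Domain blocklists are only one layer: they catch known bad sources but not freshly registered sites, which is why experts pair them with adversarial testing of the trained model itself.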

Source: ft.com


Tags: artificial intelligence, intelligence, OpenAI, AI, disinformation, propaganda, cybersecurity, Russia
Related: AI Fuels Smishing Surge · Amazon Deepens AI Alliance · AI Talent Compensation Soars · AI Voice Cloning Scams