AI Firms Tackle Prompt Injection

AI Firms Tackle Prompt Injection

2 November 2025

What happened

Google DeepMind, Anthropic, and Microsoft are developing multi-layered defences, including data governance strategies, content sanitisation, and real-time threat detection, to counter indirect prompt injection attacks. This emerging security flaw enables malicious instructions embedded in external data sources, such as documents or web pages, to manipulate AI systems into unintended actions like data leaks, misinformation dissemination, or malicious code execution. Unlike direct injection, indirect attacks exploit AI interaction with external data, causing AI to treat embedded commands as legitimate and bypassing traditional security measures, posing risks of unauthorised access and privilege escalation in AI-powered applications.

Why it matters

The emergence of indirect prompt injection introduces a control gap, increasing exposure for IT security and platform operators to AI systems executing unintended actions due to malicious instructions embedded in external data. This necessitates higher due diligence requirements for data governance and content sanitisation, as AI systems now treat compromised external data as legitimate commands, reducing the visibility of malicious intent and increasing the risk of unauthorised access and privilege escalation within AI-powered applications.

Source:ft.com

AI generated content may differ from the original.

Published on 2 November 2025
artificialintelligenceintelligenceaisecuritypromptinjectiondeepmindmicrosoftaisecuritydatagovernancecybersecurityoperationalrisk
  • AI: Data Privacy Paradox

    AI: Data Privacy Paradox

    Read more about AI: Data Privacy Paradox
  • AI Fuels Expense Fraud Surge

    AI Fuels Expense Fraud Surge

    Read more about AI Fuels Expense Fraud Surge
  • AI Capex: Risky Business?

    AI Capex: Risky Business?

    Read more about AI Capex: Risky Business?
  • OpenAI Faces Public Interest Test

    OpenAI Faces Public Interest Test

    Read more about OpenAI Faces Public Interest Test
AI Firms Tackle Prompt Injection