Anthropic Denies AI Sabotage Capability

21 March 2026

What happened

In a court filing, Anthropic executive Thiyagu Ramasamy stated that the company cannot manipulate its Claude generative AI model once it is deployed by the US military, denying any ability to stop, alter, or shut off access. The statement responds to Trump administration accusations of potential tampering. This month the Pentagon designated Anthropic a supply-chain risk, barring Department of Defense use and prompting other federal agencies to abandon Claude. Anthropic has filed two lawsuits challenging the ban and is seeking an emergency order, with a hearing scheduled for March 24; government attorneys argue the DoD need not tolerate risks to military systems.

Why it matters

Access to frontier AI models for national-security operations now faces sovereign-control constraints. The Pentagon's supply-chain risk designation for Anthropic, imposed despite the company's guarantees against remote sabotage and data access, forces procurement teams to re-evaluate vendor lock-in and operational independence for critical AI deployments. The designation, which follows the Department of Defense's ban on Anthropic AI tools, highlights a growing tension between commercial AI development and government control over deployed capabilities. Security architects must now assume potential vendor-side control risks even in on-premise or cloud-isolated deployments.

Source: wired.com


