Anthropic Refuses Military AI Terms

7 March 2026

What happened

AI developer Anthropic has refused to remove safeguards that prevent the US Department of Defense from using its models for autonomous lethal weapons or mass surveillance. OpenAI subsequently secured a deal with the Pentagon, with CEO Sam Altman acknowledging that the agreement looked "opportunistic and sloppy" and that OpenAI does not control how the Pentagon uses its products. The standoff follows reports that Anthropic's Claude was used by US Central Command for intelligence assessments and target identification during an offensive against Iran in which an estimated thousand-plus civilians were killed.

Why it matters

Anthropic's refusal demonstrates that access to frontier AI models for defence applications now carries significant geopolitical and supply-chain risk. Procurement teams must scrutinise vendor terms for military-use clauses, while security architects must account for AI's role in accelerated decision cycles and mass targeting. With AI now able to identify targets at machine speed, organisations deploying it in sensitive operations should urgently review their vendor relationships and internal governance.
