Anthropic Blocks Military AI Use
2 March 2026

What happened

Anthropic refused Pentagon requests to deploy its AI models for domestic surveillance and lethal autonomous weapons, stalling contract negotiations. The refusal drew political criticism: former President Trump called the firm 'Left Wing, Woke,' and one commentator labelled it a 'Supply Chain Risk,' implying that companies partnering with Anthropic could face barriers to US military contracts. This contrasts with OpenAI, which secured a Pentagon deal for its models. The dispute emerged as the US and Israel deployed AI systems in West Asia.

Why it matters

The dispute sets a precedent for US government pressure on AI firms' commercial partnerships, with direct consequences for procurement teams and national security planners. Restricting military access to advanced AI over political friction creates uncertainty for defence contractors. For founders and investors, it signals heightened political risk in tying AI development to explicit ethical boundaries, with potential knock-on effects for future defence contracts and market access.
