Pentagon Labels Anthropic AI Risk

6 March 2026

What happened

The US Department of War designated Anthropic's Claude AI a "supply-chain risk" because of the company's ethical guardrails, turning to OpenAI for its business instead. The designation jeopardises Anthropic's defence contracts and highlights the tension between developers' ethical principles and government procurement demands. It follows Anthropic's refusal to allow its models to be used without additional safeguards, conditions the company said it could not "in good conscience accede to."

Why it matters

Anthropic's ethical stance has become a commercial liability in defence procurement, with the "supply-chain risk" designation putting its contracts at risk. The episode is a clear signal for procurement teams and investors evaluating AI vendors: ethical positions can restrict market access in defence applications. It also feeds ongoing debates about AI chatbots in war simulations and the accelerating integration of AI into defence, raising questions about future policy and vendor selection.

