US Defense Bans Anthropic Claude

5 March 2026

What happened

US defense technology companies, including Lockheed Martin, are instructing employees to stop using Anthropic's Claude AI models. The move follows the Trump administration's designation of Anthropic as a supply chain risk, blacklisting the company from US defense contracts. Venture capital firm J2 Ventures confirmed that at least 10 of its portfolio companies involved in defense work have already begun replacing Claude in related use cases. Anthropic had previously refused Pentagon demands for unrestricted AI use, citing concerns over autonomous weapons.

Why it matters

Access to frontier AI models for defense applications now carries a sovereign premium, forcing immediate vendor re-evaluation. The supply chain risk designation directly blocks Anthropic from US defense contracts, compelling contractors like Lockheed Martin to strip Claude from their systems. Procurement teams must now factor geopolitical risk into AI vendor selection, prioritising models with clear government approval for sensitive workloads. The episode also signals that a vendor's stance on military use, such as Anthropic's refusal of unrestricted Pentagon access, can itself become a procurement risk.

