What happened
Anthropic refused the Pentagon's demand for unrestricted use of its AI models, including Claude and the more advanced Mythos, for all lawful purposes; in response, the Pentagon designated Anthropic a supply-chain risk. Mythos, which Anthropic deemed “too dangerous” for public release, can autonomously identify and exploit cybersecurity vulnerabilities. The dispute arose after the Pentagon insisted on “any lawful use” clauses in AI procurement contracts, seeking models free of vendor-imposed usage constraints.
Why it matters
This conflict exposes a potential control gap for national defence, in which private AI developers can dictate how frontier models are applied militarily. Procurement teams and security architects now face added complexity in acquiring AI, balancing advanced capabilities against vendor usage policies and national security requirements. The standoff also contrasts with China's strategy of deploying open-source models such as DeepSeek for military applications, including battle simulations, without comparable corporate governance restrictions, potentially creating an asymmetric threat in AI-driven warfare.