What happened
The US government designated AI developer Anthropic a supply-chain risk, a classification typically reserved for foreign entities, after negotiations with the Pentagon over its Claude AI model fell through. The Pentagon had sought to alter existing contract terms governing how Anthropic's models could be used; Anthropic announced it would challenge the designation in court. Concurrently, OpenAI secured a Pentagon deal on February 28, 2026, a move that prompted an estimated 295% surge in ChatGPT uninstalls and the resignation of an OpenAI executive, who cited concerns about rushed implementation without guardrails.
Why it matters
This sequence of events introduces significant risk for startups pursuing government contracts. Procurement teams face heightened scrutiny over vendor stability and ethical-use policies, particularly when existing contract terms can be changed unilaterally. For founders, the financial incentives of defence work now come bundled with the potential for public backlash and supply-chain risk designations, especially for high-profile AI technologies. Applying that label to an American company sets a precedent for how domestic tech firms engaging with the defence sector may be treated.