What happened
Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline: permit unrestricted military use of the company's Claude AI technology or risk losing its government contract. Amodei has maintained ethical objections to fully autonomous military targeting and to domestic surveillance of US citizens. Pentagon officials warned they could designate Anthropic a supply chain risk or invoke the Defense Production Act to compel access. The pressure comes despite Anthropic being one of only four AI companies, alongside Google, OpenAI, and xAI, approved for classified military networks, with contracts worth up to $200 million.
Why it matters
This ultimatum forces AI developers to choose between ethical guardrails and lucrative government contracts, directly shaping their business models and product roadmaps. Procurement teams now face a narrower vendor landscape, with defence-related projects favouring suppliers that impose no usage restrictions. As military-grade AI capabilities become a standard requirement, the risk profile of AI deployments shifts accordingly.