US Mandates Model-Usage Control — Vendors Lose Deployment Autonomy
The Trump administration issued strict AI guidelines for civilian contracts, requiring vendors to grant the government an irrevocable licence for "any lawful" use and to prohibit intentionally encoded partisan judgements. The GSA terminated Anthropic's OneGov deal, removing its AI services from procurement across all US federal branches. Anthropic CEO Dario Amodei responded by announcing a legal challenge to the DOD's "supply-chain risk" designation, calling the label "legally unsound" and refusing to allow AI for mass surveillance or fully autonomous weapons regardless of the outcome.
OpenAI's robotics head, Caitlin Kalinowski, resigned, citing concerns about AI surveillance without judicial oversight and lethal autonomy without human authorisation; she called the Pentagon agreement "rushed." Separately, Australia's government warned tech giants to align AI deployments with national values or face strict regulation, with enforcement beginning immediately.
Claude became the #1 free app on Apple's US App Store after the government blacklisting, with consumers bypassing procurement restrictions via direct download.
Vendors face a choice: accept government control mandates and cede deployment flexibility, or refuse and surrender government contracts. Because US and Australian government procurement now requires vendors to relinquish control, on terms that conflict with military-grade capabilities, vendor margins in government markets compress and decision timelines lengthen as legal challenges work through the courts.
The other side: The US has historically retreated from strict vendor-control mandates under industry lobbying. Data export restrictions (2012–2016) were substantially watered down after sustained industry pressure. If Anthropic's legal challenge succeeds on constitutional grounds, these mandates would not harden into permanent procurement requirements.
---