What happened
The Trump administration established strict AI guidelines for civilian contracts, requiring vendors to grant an irrevocable licence for "any lawful" use of their models and prohibiting intentionally encoded partisan judgments. This follows the Pentagon designating Anthropic a "supply-chain risk" and barring its technology from military contracts over disputes about safeguards; the General Services Administration (GSA) then terminated Anthropic's OneGov deal, removing its AI services from all US federal branches. The GSA rules mirror measures considered for military contracts, standardising procurement across government.
Why it matters
Access to US government AI contracts now requires vendors to relinquish control over model usage and content. Procurement teams and founders seeking federal business must accept broad, irrevocable licensing terms and ensure their models contain no embedded biases, shifting liability and operational control to the government. Coming after the dispute over Anthropic's insistence on safeguards and proposed usage restrictions for Claude in Pentagon deployments, the move sets a precedent for sovereign control over AI capabilities, affecting product roadmaps and compliance for every potential government supplier.