What happened
Governments deploying AI in public administration, national security, and policymaking face accountability questions. A dispute between the Pentagon and AI company Anthropic emerged after Anthropic refused to remove safeguards preventing mass surveillance and autonomous weapons use. The incident highlights the tension between state AI deployment goals and company control over system safeguards. Experts Raman Jit Singh Chima and Isha Suri warn against deploying AI without clear objectives, necessity tests, or consideration of privacy implications, noting that efficiency claims often lack evidence and can lead to data misuse.
Why it matters
Government AI adoption faces significant governance and data-control challenges, constraining state deployment for national security and public administration. AI companies' control over model safeguards and data-usage policies directly limits government capabilities, as the Pentagon-Anthropic dispute shows. Procurement teams and legal counsel must scrutinise vendor terms, ensuring necessity and proportionality tests are met before deployment. This follows recent US mandates for strict AI contract terms.