What happened
Palantir CEO Alex Karp said the US Department of Defense (DoD) is not using AI for domestic mass surveillance and has no plans to, amid an ongoing dispute between the DoD and Anthropic. Palantir is the primary channel through which the DoD uses Anthropic's large language model, Claude; as Karp put it, "It's our stack that runs the LLMs." The conflict arose from Anthropic's insistence on contractual safeguards against domestic surveillance and autonomous weapons, which the DoD rejected, prompting Secretary of Defense Pete Hegseth to designate Anthropic a "supply-chain risk". Anthropic has since sued the Pentagon over the designation.
Why it matters
This dispute highlights the tension between AI developers' ethical safeguards and government demands for unrestricted technology use in national-security contexts. For procurement teams and legal advisers, it underscores the growing complexity of vendor contracts involving frontier AI, particularly around usage terms and sovereign control over advanced capabilities. The DoD's "supply-chain risk" designation signals a hardening stance on vendor compliance, potentially limiting future partnerships for AI firms unwilling to cede full control over their models for military applications. Founders and investors should re-evaluate the long-term viability and ethical implications of government contracts for dual-use AI technologies.