What happened
The US military has expanded its use of AI to pinpoint airstrike targets in Iran, deploying Palantir software that integrates Anthropic's Claude AI. Claude helps military analysts sift vast volumes of intelligence data, cutting target identification from hours or days to seconds, according to Adm. Brad Cooper. The expansion follows a clash between the Defense Secretary and Anthropic over limits on how its AI may be used, after which the Defense Department labeled Anthropic a national security threat and threatened to remove its software from military use in the coming months.
Why it matters
Growing military reliance on AI for target identification raises immediate questions about human oversight and accountability in lethal decision-making. Procurement teams face new vendor-relationship complexity now that the Defense Department has designated a major AI supplier a national security threat and signaled its possible removal from military use. Security architects must account for AI systems that compress intelligence analysis from hours to seconds, while lawmakers demand strict guardrails to keep human judgement central to decisions on lethal force.