What happened
The US and Israeli militaries deployed Anthropic's Claude AI model, integrated into Palantir's Maven Smart System, during recent strikes on Iran. The system shortened the "kill chain" by identifying targets, assisting with strike approvals, and initiating strikes; the initial attacks were described as fast and numerous. The deployment occurred despite Anthropic's public stance against its technology being used for mass surveillance or fully autonomous weapons.
Why it matters
AI deployment in military operations accelerates the "kill chain" and challenges traditional human-oversight models. For defence strategists and procurement teams, this demonstrates AI's capacity to automate target identification and strike assistance. The use of Claude despite Anthropic's ethical guidelines highlights the tension between developer ethics and operational demands for frontier AI, and has prompted employees at companies such as Google and OpenAI to demand stricter limits on military AI applications.