AI Firms Recruit Weapons Experts

17 March 2026

What happened

OpenAI and Anthropic have hired chemical, biological, and high-yield explosives experts to help establish ethical guardrails for their AI models. The move follows US military adoption of AI in wartime operations, including use of Anthropic's Claude model in operations against Iran. The Pentagon, seeking unrestricted access to AI systems, designated Anthropic a "supply chain risk" after the company refused to remove safeguards against mass surveillance and autonomous weapons, and ordered its technology removed from military systems within 180 days. Despite the designation, Anthropic's technology has continued to be used in national security operations.

Why it matters

AI developers are actively shaping the ethical boundaries of AI in warfare, putting them in direct tension with military demands for unrestricted access. The designation of Anthropic as a "supply chain risk" and the subsequent removal order set a precedent: an AI company's ethical stance can now constrain its access to the defence market. It also builds on Anthropic's earlier refusal to grant the Pentagon unrestricted access to its models. For security architects and procurement teams, this means re-evaluating AI vendor dependencies and the operational risks that come with evolving use policies.
