Anthropic Launches AI Code Security

28 February 2026

What happened

Anthropic released Claude Code Security, an AI-powered tool that scans software codebases for vulnerabilities and recommends targeted patches. The system uses Claude Opus 4.6 to identify complex flaws, such as business logic errors and broken access controls, by reasoning through code much as a human security researcher would. Anthropic claims the model has discovered more than 500 previously undetected vulnerabilities in production open-source codebases. Following the announcement, shares of major cybersecurity firms, including CrowdStrike, Okta, Cloudflare, SailPoint, and Zscaler, fell sharply, wiping billions of dollars off their market capitalisations.

Why it matters

The immediate sell-off in cybersecurity stocks signals investor concern that AI-driven automation could disrupt traditional security software models and compress service margins. For security architects and procurement teams, this shift means evaluating AI-native orchestration and verifiable trust controls, as autonomous coding pushes more security decisions onto machines. The announcement also forces a re-evaluation of the competitive landscape: established security vendors must now focus on integrating AI capabilities rather than relying solely on conventional rule-based analysis.

