What happened
Anthropic released Claude Code Security in a limited research preview for Enterprise and Team customers and open-source maintainers. The web-based tool uses Claude Opus 4.6 to scan codebases, trace data movement, and identify complex vulnerabilities such as business logic flaws that rule-based static analysis misses. A multi-stage verification process filters false positives and assigns severity ratings to findings. According to Anthropic, the underlying model recently identified more than 500 previously undetected vulnerabilities in production open-source codebases. All suggested patches require human approval.
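For illustration only, here is a hypothetical example (not taken from Anthropic's announcement) of the kind of business logic flaw the release describes: no dangerous API is called and no input is left unsanitized, so a signature-based scanner has nothing to match, but reasoning about how the requested amount flows from the caller to the payout reveals the problem.

```python
# Hypothetical illustration of a business logic flaw: every individual call
# looks "safe" to pattern matching, yet the refund path never checks the
# requested amount against what the customer actually paid.

from dataclasses import dataclass


@dataclass
class Order:
    customer_id: str
    status: str
    amount_paid: float


def issue_refund(customer_id: str, amount: float) -> None:
    # Stand-in for a payment-provider call.
    print(f"refunded {amount:.2f} to {customer_id}")


def apply_refund(order: Order, requested_amount: float) -> None:
    if order.status != "PAID":
        raise ValueError("only paid orders can be refunded")
    # BUG: no check that requested_amount is positive or that it is at most
    # order.amount_paid. A rule-based scanner sees nothing to flag; tracing
    # the data flow from request to payout exposes the flaw.
    issue_refund(order.customer_id, requested_amount)


# A customer who paid 20.00 can request a 2,000.00 refund.
apply_refund(Order("cust_42", "PAID", 20.00), 2000.00)
```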
Why it matters
Moving vulnerability detection from static pattern matching to active reasoning shifts the security bottleneck from finding flaws to triaging them. Security architects and platform engineers face a new influx of vulnerability reports that flag complex business logic errors rather than just exposed credentials. The tool builds on Claude Opus 4.6, released earlier this month, and its suggested patches still have to be evaluated against each team's own system context. The strict requirement for human approval prevents automated disruptions, but the volume of newly surfaced findings will test existing review capacity, as sketched below.
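As a rough sketch of that triage pressure, the snippet below assumes a hypothetical findings export; the JSON field names and severity labels are invented for illustration and are not Claude Code Security's documented output format. It simply gates which reports go straight to human reviewers and which join the normal backlog.

```python
# Sketch of a triage gate for AI-generated security findings.
# The schema here is an assumption made for this example only.

import json
from collections import Counter

REVIEW_NOW = {"critical", "high"}   # route straight to human review
BACKLOG = {"medium", "low"}         # batch into the regular review queue


def triage(findings_json: str):
    """Split findings into urgent and deferred buckets and count severities."""
    findings = json.loads(findings_json)
    counts = Counter(f["severity"] for f in findings)
    urgent = [f for f in findings if f["severity"] in REVIEW_NOW]
    deferred = [f for f in findings if f["severity"] in BACKLOG]
    return urgent, deferred, counts


sample = json.dumps([
    {"id": "F-101", "severity": "critical", "title": "refund amount not bounded"},
    {"id": "F-102", "severity": "low", "title": "verbose error message"},
])

urgent, deferred, counts = triage(sample)
print(counts)                        # Counter({'critical': 1, 'low': 1})
print([f["id"] for f in urgent])     # ['F-101']
```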