Anthropic Previews Claude Code Security

21 February 2026

What happened

Anthropic released Claude Code Security in a limited research preview for Enterprise and Team customers. The tool scans codebases, traces data flows, and evaluates component interactions to identify vulnerabilities missed by static analysis. It assigns severity and confidence ratings to findings, running results through a multi-stage verification process to filter false positives. The system enforces human oversight: developers review flagged issues and suggested patches in a dedicated dashboard before applying any changes.
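To make the described review flow concrete, here is a minimal, hypothetical sketch of how findings with severity and confidence ratings might be triaged and gated behind human approval. The types, field names, and thresholds below are illustrative assumptions, not Anthropic's actual API.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class Finding:
    """One flagged vulnerability with model-assigned ratings and a suggested fix."""
    file: str
    description: str
    severity: Severity
    confidence: float          # 0.0-1.0, assigned by the scanner (hypothetical scale)
    suggested_patch: str
    verified: bool = False     # True only after multi-stage verification


def triage(findings, min_confidence=0.8):
    """Keep only verified findings above a confidence threshold to filter likely false positives."""
    return [f for f in findings if f.verified and f.confidence >= min_confidence]


def apply_patch(finding, human_approved):
    """No change is applied without explicit human sign-off from the review dashboard."""
    if not human_approved:
        return False
    print(f"Applying patch to {finding.file}: {finding.description}")
    return True


if __name__ == "__main__":
    findings = [
        Finding("auth/session.py", "Session token not invalidated on logout",
                Severity.HIGH, 0.92, "...", verified=True),
        Finding("api/upload.py", "Possible path traversal in filename handling",
                Severity.CRITICAL, 0.55, "...", verified=False),
    ]
    for f in triage(findings):
        apply_patch(f, human_approved=True)  # approval would come from a human reviewer, not code
```

The key design point this sketch illustrates is that verification and confidence filtering happen before a human ever sees a finding, while the final write to the codebase happens only after that human approves.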

Why it matters

Moving AI from code generation to active vulnerability remediation forces security architects to evaluate automated patching workflows. By tracing data flows rather than just matching known patterns, the tool targets complex logic flaws. The strict human-in-the-loop constraint ensures platform engineers retain control over deployment. The release also lands a day after an AI coding bot disrupted Amazon Web Services, underscoring why Anthropic requires human approval before any automated code change is applied.
