NCMEC Data Reveals AI Safety Gap

27 February 2026

What happened

OpenAI filed 75,027 CyberTipline reports to the National Center for Missing & Exploited Children (NCMEC) in the first half of 2025, according to its transparency report. Anthropic reported 859 images to NCMEC between April 2024 and March 2025, per its published child safety commitments report. A Forbes opinion column framed this disparity as evidence that human-led oversight outperforms Anthropic's "Constitutional AI" approach, but that framing conflates distinct concepts.

Constitutional AI is Anthropic's model alignment methodology — a training technique that uses AI feedback against written principles to shape model behaviour. It is not Anthropic's CSAM detection system. Both OpenAI and Anthropic use automated hash-matching technology against the NCMEC database to detect known child sexual abuse material. The reporting gap primarily reflects the difference in platform type: OpenAI processes billions of user-uploaded images through DALL-E, Sora, and ChatGPT, while Anthropic's Claude handles a much smaller volume of image inputs. NCMEC flagged 17 companies, including Anthropic, for sparse reporting volumes.
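The hash-matching approach described above can be sketched in a few lines. This is a simplified illustration only: production systems use perceptual hashing (such as Microsoft's PhotoDNA) that tolerates resizing and re-encoding, not a plain cryptographic digest, and the hash set below is a placeholder, not real NCMEC data.

```python
import hashlib

# Placeholder set standing in for a known-hash database.
# (This entry is the SHA-256 of b"test", used purely for illustration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the known-hash set.

    Real deployments use perceptual hashes so near-duplicates still match;
    an exact digest like SHA-256 misses any re-encoded copy.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

The key point for evaluators: this detection layer is the same category of tooling at both companies, entirely separate from training-time alignment methods like Constitutional AI.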

Why it matters

For security architects and procurement teams, raw NCMEC report volumes cannot serve as a reliable proxy for safety effectiveness without adjusting for platform type, user base size, and image processing volume. The meaningful metric is detection rate relative to exposure, not absolute report count. Teams evaluating AI providers should examine the specific detection mechanisms deployed — hash-matching, classifiers, human review — and their coverage across input types, rather than relying on headline comparisons across fundamentally different platforms.
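A minimal sketch of the normalization the paragraph above calls for. The volume figures in the usage comments are hypothetical, since neither transparency report publishes per-period image-processing totals; only the report counts come from the article.

```python
def reports_per_million_images(report_count: int, images_processed: int) -> float:
    """Normalise an absolute NCMEC report count by image-processing volume.

    Absolute counts are only comparable across platforms after adjusting
    for exposure; this expresses reports per million images processed.
    """
    if images_processed <= 0:
        raise ValueError("images_processed must be positive")
    return report_count / images_processed * 1_000_000

# Illustrative only -- denominators are invented, not reported figures:
# reports_per_million_images(75_027, 5_000_000_000)  # high-volume image platform
# reports_per_million_images(859, 50_000_000)        # low-volume image platform
```

With invented denominators of this rough shape, the smaller platform's normalized rate can equal or exceed the larger one's, which is exactly why headline counts mislead.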

Correction: An earlier version of this story relied on figures and framing from a Forbes opinion column that incorrectly characterised Constitutional AI as Anthropic's CSAM detection method and cited an unverified Anthropic NCMEC figure. The story has been rewritten using verified data from OpenAI's transparency report and Anthropic's child safety commitments report.

Source: forbes.com
