NCMEC Reports AI CSAM Surge

1 March 2026

What happened

NCMEC's CyberTipline received over one million generative AI-related reports between January and September 2025, according to Fallon McNulty. Homeland Security Investigations recorded a 600% increase in reports of child exploitation involving generative AI in the first half of 2025 compared with 2023 and 2024 combined, according to Michael Prado. Bad actors exploit open-source AI models and platforms such as Bashable.art and undress.ai to create increasingly realistic child sexual abuse material (CSAM), depicting both real and nonexistent children, overwhelming law enforcement.

Why it matters

The surge in AI-related child exploitation reports creates immediate legal and operational risks for engineers and founders building generative AI tools. Unmoderated open-source models and smaller platforms enable the creation of illicit content that is difficult to distinguish from authentic material, increasing the burden on security architects to implement effective content moderation and victim identification systems. Legal teams face escalating prosecution challenges as AI-generated material complicates victim identification and evidence collection, demanding new forensic capabilities.
