The Internet Watch Foundation (IWF) reports a significant increase in AI-generated child sexual abuse imagery. In the first half of 2025, reports of such imagery surged by 400%. The IWF's analysis reveals a disturbing trend: AI is now capable of producing increasingly realistic and severe depictions of child sexual abuse, including deepfake videos.
These AI-generated images are appearing on open platforms, making them easily accessible, and experts are finding it increasingly difficult to distinguish them from imagery of real abuse. In response, the UK government is tightening legislation to criminalise the creation, possession, or sharing of AI tools designed to generate child sexual abuse material, as well as instruction manuals for creating such content. The IWF has also launched Image Intercept, a tool that blocks known child abuse imagery by matching against a database of more than 2 million files.