What happened
Online creators monetised a significant wave of AI-generated disinformation about the US/Israel-Iran conflict, which began on 13 June, accumulating hundreds of millions of combined views. BBC Verify analysis identified numerous AI-generated videos, including fake rocket strikes on Tel Aviv and a burning Burj Khalifa, alongside fabricated satellite images depicting destruction at the US Navy's Fifth Fleet base in Bahrain. Google's SynthID identified one such satellite image as having been generated or edited by a Google AI tool. X responded by announcing that creators who publish unlabelled AI-generated footage of armed conflict face temporary suspension from its monetisation programme.
Why it matters
The proliferation of easily accessible, low-cost AI tools for generating realistic synthetic media sharply lowers the cost of running disinformation campaigns. For content moderation teams, the volume and sophistication of AI-generated content now overwhelm traditional verification methods, eroding trust in online information. Security architects must assume a higher baseline of synthetic media in public discourse. Platform engineers face increased pressure to implement effective detection and labelling mechanisms, as X's action demonstrates a direct link between unlabelled AI content and platform revenue.
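To make the labelling-and-monetisation link concrete, here is a minimal sketch of the kind of policy check a platform might run on uploaded footage. It is an illustration under assumed inputs: the `Upload` fields, the detector score, the threshold, and the decision strings are all hypothetical and do not describe X's actual systems or rules.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    creator_id: str
    declared_ai_generated: bool   # creator's own label at upload time (assumed field)
    detector_score: float         # output of a synthetic-media classifier, 0..1 (assumed)
    topic_tags: set[str]          # e.g. {"armed_conflict"} (assumed taxonomy)

# Illustrative threshold only; real platforms tune this against false-positive cost.
AI_SCORE_THRESHOLD = 0.9

def monetisation_action(upload: Upload) -> str:
    """Return a moderation decision for a single upload.

    Unlabelled content that a detector scores as likely synthetic and that covers
    armed conflict is routed to monetisation review, mirroring the general shape
    of the policy X announced; the specifics here are assumptions, not X's rules.
    """
    likely_synthetic = upload.detector_score >= AI_SCORE_THRESHOLD
    if likely_synthetic and not upload.declared_ai_generated:
        if "armed_conflict" in upload.topic_tags:
            return "suspend_monetisation_pending_review"
        return "apply_ai_label_and_notify_creator"
    return "no_action"

if __name__ == "__main__":
    clip = Upload("creator-123", declared_ai_generated=False,
                  detector_score=0.97, topic_tags={"armed_conflict"})
    print(monetisation_action(clip))  # -> suspend_monetisation_pending_review
```

The point of the sketch is the ordering of signals: a creator's own declaration is checked before any classifier output, so the monetisation penalty attaches specifically to *unlabelled* synthetic conflict footage rather than to AI-generated content as such.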