What happened
Following deadly tornadoes in Southwest Michigan on March 6, AI-generated images and videos depicting the disaster spread rapidly online, often without disclosure. One Facebook page posting such AI visuals gained over 1 million followers and offered a $1 subscription. On March 10, Meta's Oversight Board called for new rules governing AI-generated content that depicts real-world events, citing examples from the Israel-Iran conflict, and urged faster responses to deceptive AI output. Meta currently requires users to apply its AI disclosure tool to photorealistic video and realistic audio.
Why it matters
Undisclosed AI-generated content during a crisis undermines public access to accurate information, sowing confusion and panic, and lets bad actors monetise misinformation, as the million-follower Facebook page demonstrates. For content moderation teams and platform architects, this underscores the need to implement effective detection and mandatory disclosure systems for AI-generated media, particularly content depicting real-world events. The Oversight Board's March 10 rebuke of Meta's handling of AI content highlights the pressure on platforms to enforce clear policies and tooling.