What happened
AI-generated images and videos exploiting deadly tornadoes that struck Southwest Michigan on March 6 went viral online, deceiving many users seeking storm information. Within an hour of the tornado's touchdown in Three Rivers, AI creators had posted fictitious content, often without disclosure, garnering hundreds of comments and shares; one account monetising such posts has over 1 million followers. On March 10, 2025, Meta's Oversight Board called for the company to establish new rules for reliably recognising AI-generated content and to amend its policies so it can respond promptly to deceptive AI output, citing similar problems with Israel-Iran conflict imagery.
Why it matters
Social media platforms face mounting pressure to implement effective content provenance mechanisms as AI-generated disaster imagery exploits real-world events, deceiving users and monetising misinformation. Platform engineers and content moderation teams must now contend with recommendation algorithms that amplify fake content on engagement alone, even when the reactions are negative, as demonstrated by the monetised account with over 1 million followers. The pressure follows the Meta Oversight Board's March 10, 2025 call for new rules to reliably recognise AI-generated content and to ensure timely responses to deceptive output.
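To illustrate what a content provenance check can involve: some image generators embed an IPTC DigitalSourceType value (such as `trainedAlgorithmicMedia`) in an image's XMP metadata. The sketch below is a deliberately crude heuristic that only scans raw bytes for those markers; it is not any platform's actual pipeline, and robust provenance systems instead verify cryptographically signed C2PA manifests. The function name and sample payload are illustrative assumptions.

```python
# Crude heuristic for flagging AI-generated images via embedded XMP
# metadata. Real provenance verification (e.g., C2PA) checks signed
# manifests; this sketch only searches for IPTC DigitalSourceType
# markers that some generators write into image metadata.

AI_SOURCE_MARKERS = (
    b"compositeWithTrainedAlgorithmicMedia",  # IPTC: AI-edited composite
    b"trainedAlgorithmicMedia",               # IPTC: fully AI-generated
)

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw image bytes contain a known
    AI-generation DigitalSourceType marker."""
    return any(marker in image_bytes for marker in AI_SOURCE_MARKERS)

# Fabricated XMP snippet, as it might appear inside a JPEG's APP1 segment.
sample = (
    b"\xff\xd8\xff\xe1<x:xmpmeta>"
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType></x:xmpmeta>\xff\xd9"
)
print(looks_ai_generated(sample))            # → True
print(looks_ai_generated(b"\xff\xd8\xff\xd9"))  # → False (no marker)
```

A byte scan like this produces false negatives whenever metadata is stripped on upload, which is exactly why platforms are being pushed toward signed provenance standards rather than metadata heuristics.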