What happened
X announced new policies requiring creators to disclose AI-generated videos depicting armed conflicts. Nikita Bier stated that failing to add an AI disclosure will trigger a 90-day suspension from the platform's Creator Revenue Sharing programme, and that repeat violations will lead to permanent removal from the programme. X will identify non-compliant content through Community Notes, metadata, and other generative-AI signals, with the aim of preserving information authenticity during wartime.
Why it matters
This policy ties creator monetisation to content authenticity, shifting responsibility for AI-generated misinformation onto revenue-sharing participants. Content moderation teams gain a clear enforcement mechanism that combines community fact-checking with technical AI-detection signals. Procurement teams and platform engineers should prioritise tooling that integrates metadata analysis with AI-detection capabilities. The move follows Meta's recent ban on agency accounts running automated systems, signalling a broader industry trend towards tighter controls on AI-generated content in sensitive contexts.