What happened
TikTok introduced a new control in the 'Manage Topics' section of settings that lets users adjust how much AI-generated content appears in their 'For You' feed, alongside existing content category preferences. Concurrently, TikTok began testing 'invisible watermarks' designed to resist removal, augmenting its established Content Credentials system, which embeds provenance metadata. The watermarks aim to improve detection of AI content even after it has been edited or re-shared, and reinforce the requirement that creators label realistic AI-generated material.
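The difference between metadata-based labels and in-pixel watermarks can be illustrated with a minimal sketch. This uses a naive least-significant-bit scheme on a toy pixel list; it is purely illustrative and is not TikTok's actual method, which is unpublished and presumably far more robust:

```python
# Hypothetical sketch of an invisible watermark: hide label bits in the
# least significant bit of each pixel value. NOT TikTok's real scheme.

def embed_lsb(pixels, bits):
    """Overwrite the low bit of the first len(bits) pixels with the watermark."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n):
    """Read the watermark back from the low bit of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 13, 77, 54, 128, 9, 240, 66]  # toy 8-pixel "image"
mark = [1, 0, 1, 1]                          # toy "AI-generated" tag
tagged = embed_lsb(pixels, mark)
assert extract_lsb(tagged, 4) == mark        # tag survives in the pixels
```

The contrast this illustrates: a metadata label (such as a Content Credentials manifest) lives alongside the pixels, so a screenshot or re-encode can drop it, whereas an in-pixel watermark travels with the image data itself. A naive LSB mark like the one above is still fragile under compression, which is why resilient schemes are an active area of testing.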
Why it matters
User-controlled AI content visibility and resilient watermarking create a new operational constraint for platform operators, who must now ensure accurate content categorisation and watermark persistence across re-shared or edited media. Because no detection scheme catches all manipulated AI content, IT security and compliance teams face greater exposure to unlabelled or miscategorised AI-generated material. This raises due diligence requirements for maintaining content integrity and for managing remaining gaps in AI content provenance.