What happened
Ofcom, the UK communications regulator, has opened an investigation into X over its Grok AI chatbot's generation of sexualised images of women and children. The move places X's content moderation and AI safety practices under formal regulatory scrutiny, with potential outcomes ranging from a multi-million-pound fine to a ban on Grok in the UK.
Why it matters
The investigation creates significant regulatory compliance exposure for X's platform operators and legal teams around AI-generated content. Content moderation controls for AI outputs are now under direct regulatory challenge, raising the due-diligence burden on platform governance and risk functions to prevent and detect the generation of prohibited content, and increasing reliance on robust AI safety mechanisms and content filtering.