What happened
UK Prime Minister Keir Starmer announced plans to amend the Online Safety Act to cover AI chatbots. The amendment subjects AI developers to the same illegal-content and child-safety requirements as social media platforms, a change that follows a deepfake scandal involving X's Grok AI. Ofcom gains the power to fine non-compliant AI firms up to 10% of global turnover. The move formalises regulatory oversight of generative AI outputs, which were previously considered outside the Act's primary scope.
Why it matters
Compliance officers and product architects must now treat generative AI outputs as regulated media. Because the Online Safety Act now covers chatbots, developers face mandatory content-filtering and age-verification requirements, and failure to prevent deepfakes or harmful content can result in fines of up to 10% of global revenue. The amendment follows three regulatory actions against X and Grok since January 8. For platform owners, UK market entry now carries higher safety-engineering costs and increased legal liability.