Ofcom, the UK's communications regulator, is preparing to audit the algorithms used by major tech companies to ensure they are protecting children online. Chief Executive Melanie Dawes has met with US AI firms to discuss the Online Safety Act, which mandates that platforms implement robust age verification systems and moderate content to prevent minors from accessing harmful material.
Under the Online Safety Act, Ofcom can impose substantial fines, up to 10% of a company's global annual revenue, for non-compliance. The regulator is concerned that algorithms on platforms like X, TikTok, and Reddit may expose young users to inappropriate content, including misinformation and explicit material. Ofcom's audits will assess the effectiveness of age verification systems and content moderation processes, requiring platforms to adjust algorithmic curation to limit risks to younger users.
Ofcom's efforts include checking whether tech firms have completed a 'children's access assessment', which determines whether children are likely to access their service. Firms whose services are likely to be accessed by children must then complete a 'children's risk assessment' evaluating the risks those services pose to them. The regulator may also name and shame non-compliant companies and could ultimately bar children from using platforms that fail to comply.