What happened
Australia's eSafety regulator warns that it may compel app stores and search engines to block artificial intelligence services that fail to implement age verification by March 9, 2026. Only nine of 50 popular text-based AI products had rolled out, or announced plans for, age assurance systems. Services such as OpenAI's ChatGPT must restrict Australians under 18 from pornography, extreme violence, self-harm, and eating disorder content, or face fines of up to A$49.5 million (US$35 million).
Why it matters
This establishes a new regulatory precedent for AI age verification, shifting responsibility to platform gatekeepers. Procurement teams face increased due diligence for AI services operating in Australia, requiring explicit age assurance mechanisms. Security architects must now account for potential service blocks by app stores and search engines, impacting access and deployment strategies for AI tools. This follows Australia's December ban on social media for teenagers, extending its regulatory approach to AI.