What happened
In June 2025, OpenAI's abuse-detection systems flagged Jesse Van Rootselaar's ChatGPT account for activity in "furtherance of violent activities" and banned it for policy violations. The company considered but did not refer the account to the Royal Canadian Mounted Police (RCMP), concluding the activity did not meet its "imminent and credible risk" threshold for contacting law enforcement. Eight months later, in February 2026, Van Rootselaar killed eight people in a school shooting in British Columbia before dying of a self-inflicted gunshot wound. OpenAI then contacted the RCMP with information about the individual and their ChatGPT use.
Why it matters
AI platform providers face growing scrutiny over their internal safety protocols and the thresholds at which they report violent content. OpenAI's "imminent and credible risk" standard for law enforcement referral proved insufficient to prevent a real-world tragedy, exposing a gap in proactive threat mitigation. Legal teams, product safety leads, and security architects building AI platforms should review their abuse-detection and reporting policies: current safety thresholds do not align with public safety expectations, and clear, actionable reporting mechanisms for identified violent intent should take priority.