
OpenAI Violated Canadian Privacy Laws

6 May 2026 · By Pulse24 desk

What happened

OpenAI failed to comply with Canadian privacy laws in training its ChatGPT chatbot, a joint investigation by federal and provincial privacy watchdogs has concluded. The probe, led by Federal Privacy Commissioner Philippe Dufresne alongside counterparts from British Columbia, Alberta, and Quebec, found that OpenAI engaged in overly broad data collection, including of sensitive personal details, without adequate transparency or consent. The investigation also found that OpenAI provided insufficient mechanisms for individuals to access, correct, or delete their personal information. The company has since agreed to implement remedial measures, including publishing more privacy information, improving data export tools, and testing protections for children of public figures.

Why it matters

Global AI model deployment now faces heightened regulatory enforcement, making an immediate review of data sourcing and user rights mechanisms necessary. Legal teams should audit AI training data practices against national privacy laws, particularly with respect to sensitive personal information and consent. The finding follows OpenAI's agreement to new safety measures with Canada in March, indicating a pattern of regulatory pressure on AI developers to strengthen data governance. Founders launching AI products must prioritise transparent data collection and robust user control features to mitigate legal and reputational risks.

Source · sootoday.com