What happened
Microsoft confirmed that an Office software bug allowed Copilot AI to access and summarise confidential emails belonging to paying customers. The flaw bypassed data protection policies intended to isolate sensitive information, affecting enterprise users who rely on Copilot for automated summaries. Microsoft identified the vulnerability after customers reported restricted data surfacing in AI outputs, and has since patched the flaw to restore data isolation boundaries across the Office suite.
Why it matters
Security architects and compliance officers face immediate data leakage risks because internal isolation protocols failed. The breach shows that existing data protection policies cannot reliably prevent AI models from ingesting restricted content, so procurement teams must verify technical isolation mechanisms before expanding AI deployments. The incident follows the January exposure of US Government documents to ChatGPT, pointing to a pattern of systemic failure in AI data boundaries. Leaks of this kind can also compromise legal privilege.