Anthropic AI Deanonymizes Online Accounts

8 March 2026

What happened

Anthropic and ETH Zurich researchers published a preprint demonstrating that large language models (LLMs) can deanonymize online accounts at scale. The study, titled "Large-scale online deanonymization with LLMs," showed that AI systems can analyze public text to extract identity signals and match pseudonymous profiles to real individuals. Experiments linking Hacker News users to LinkedIn profiles and Reddit accounts achieved up to 68% recall at 90% precision, significantly outperforming traditional methods. Identifying an online account with this experimental pipeline costs an estimated $1 to $4 per profile.
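To make the headline numbers concrete, here is a toy sketch (not the paper's actual pipeline; all account names are hypothetical) of what "68% recall at 90% precision" means for an account-matching task: of all true pseudonym-to-identity links, 68% are found, and roughly 9 in 10 of the links the system asserts are correct.

```python
# Toy illustration of precision/recall for account matching.
# This is NOT the study's method; names and numbers are illustrative only.

def precision_recall(predicted: set, actual: set) -> tuple[float, float]:
    """Compute precision and recall of predicted (pseudonym, identity) links."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Ground truth: 50 pseudonymous accounts with known real identities.
actual_links = {(f"user{i}", f"person{i}") for i in range(50)}

# A matcher that links 34 accounts correctly (34/50 = 68% recall) and
# makes 4 wrong links, so 34 of its 38 predictions are correct (~89% precision).
predicted_links = {(f"user{i}", f"person{i}") for i in range(34)}
predicted_links |= {(f"user{i}", f"person{i + 1}") for i in range(40, 44)}

p, r = precision_recall(predicted_links, actual_links)
print(f"precision={p:.2f} recall={r:.2f}")
```

The trade-off the two numbers capture: a matcher can raise precision by asserting only its most confident links, at the cost of recall, which is why the paper reports both together.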

Why it matters

Online anonymity is eroding as AI systems automate identity exposure, enabling large-scale investigations at an estimated $1 to $4 per profile. This shift weakens "practical obscurity," a fundamental protection for journalists, whistleblowers, and individuals discussing sensitive topics. Security architects and privacy officers must reassess existing safeguards, while founders of online communities face increased pressure to implement stronger platform protections against AI-powered deanonymization.
