What happened
Anthropic agreed to a $1.5 billion class-action settlement covering roughly half a million authors whose pirated books were downloaded to train its AI models. The claims process, run by a third-party administrator, has proven unreliable: authors such as Maureen Johnson reported glitches and repeated submission failures when trying to collect their share. The settlement, which goes to a fairness hearing on May 14, resolves claims over Anthropic's acquisition of pirated data, even though the company maintains it did not use pirated works in its publicly released AI products.
Why it matters
The flawed claims process behind Anthropic's $1.5 billion copyright settlement highlights operational risk for legal and compliance teams managing large-scale AI training-data liabilities. Even with a legal resolution in hand, the practical execution of compensation can fail, damaging brand reputation and adding administrative overhead to future settlements. Procurement teams should scrutinise vendor data sourcing and claims-administration capabilities, particularly with similar lawsuits pending against Meta and OpenAI.
