
Anthropic Leak Exposes Code Ownership Risk

29 April 2026 · By Pulse24 desk

What happened

On March 31, 2026, Anthropic accidentally published 512,000 lines of Claude Code's source due to a missing configuration file. The codebase was rapidly mirrored across GitHub, and an AI-rewritten version, "claw-code," quickly gained 100,000 stars. Anthropic issued DMCA takedown notices, prompting questions about who owns AI-generated code. The US Copyright Office confirmed in January 2025 that only human-created work is copyrightable, a stance reinforced by the Supreme Court's March 2026 rejection of the Thaler appeal, which established "meaningful human authorship" as a requirement for protection.

Why it matters

Uncertainty over AI-generated code ownership creates significant legal and commercial risks for development teams and founders. Code lacking "meaningful human authorship" may not be copyrightable, leaving it vulnerable to copying without legal recourse. IP clauses in employment contracts can extend employer claims to AI-assisted side projects, even those built on personal time. Procurement teams must scrutinise AI tool licences for open-source contamination risks, while security architects face the challenge of protecting assets that cannot be copyrighted. This follows recent concerns about copyright claims over AI clones being used to exploit gaps in the system.

Source · legallayer.substack.com · AI-processed content may differ from the original.