Anthropic has endorsed California's SB 53, a bill designed to regulate powerful AI systems. The bill mandates that companies developing advanced AI models must create and publish safety frameworks detailing how they manage and mitigate potential catastrophic risks. These companies would also be required to release public transparency reports summarising their risk assessments and safety measures before deploying new models.
SB 53 also includes provisions for reporting critical safety incidents to the state within 15 days and whistleblower protections for employees who report potential risks. The bill determines which AI systems fall under regulation based on the computing power used to train them, with the current threshold set at 10^26 FLOPs (total floating-point operations, not operations per second). Anthropic believes that while federal regulation would be preferable, SB 53 is a necessary step to ensure AI safety, especially given the rapid pace of AI advancement.
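To make the threshold concrete, here is a minimal sketch of how a training run might be checked against the 10^26 FLOPs line. The `6 × parameters × tokens` formula is a widely used rule-of-thumb approximation for dense transformer training compute, not language from the bill, and the model sizes in the example are hypothetical.

```python
# SB 53's compute threshold, as described in the article: 10^26 total
# floating-point operations used in training (an assumption that this is
# measured as cumulative training FLOPs).
SB53_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute for a dense transformer
    using the common ~6 * parameters * tokens heuristic."""
    return 6 * num_parameters * num_tokens

def crosses_threshold(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(num_parameters, num_tokens) >= SB53_THRESHOLD_FLOPS

# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion
# tokens uses roughly 6 * 1e12 * 20e12 = 1.2e26 FLOPs, above the threshold.
print(crosses_threshold(1e12, 20e12))   # True
print(crosses_threshold(1e10, 1e12))    # False (~6e22 FLOPs)
```

Under this rough heuristic, only the very largest frontier training runs would trigger the bill's requirements, which is consistent with the article's framing of SB 53 as targeting "powerful AI systems."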
The endorsement comes amid pushback from Silicon Valley and the federal government against AI safety efforts. SB 53 aims to create a level playing field where transparency about AI capabilities that pose risks to public safety is mandatory, not optional. Senator Scott Wiener significantly amended SB 53 after negotiations with tech groups, including OpenAI, to address concerns about which companies should be included and how international AI safety frameworks should be considered.