California has passed SB 53, the Transparency in Frontier Artificial Intelligence Act, mandating AI safety transparency for major AI developers. The law requires developers of frontier models trained with more than 10^26 FLOPs of compute, and, for some provisions, companies with more than $500 million in annual revenue, to publish and adhere to their safety protocols. These protocols must detail how the companies mitigate catastrophic risks, such as cyberattacks and the creation of biological weapons.
The Act establishes mechanisms for reporting critical safety incidents to the California Office of Emergency Services and protects whistleblowers. It also directs the California Department of Technology to recommend annual updates to the law based on technological advancements and international standards. SB 53 aims to strike a balance between fostering innovation and ensuring public safety; Encode AI's VP of Public Policy, Adam Billen, notes that the law ensures companies follow through on their safety commitments.
Most of SB 53's provisions take effect on January 1, 2026. The law focuses on transparency and risk management rather than creating new liability for harm caused by AI systems. By requiring AI developers to demonstrate responsible practices, California seeks to lead in both technological innovation and safety.