What happened
OpenAI supports Illinois Senate Bill 3444, an "Artificial Intelligence Safety Act" that would limit liability for "frontier model" developers in cases of "critical harms". The bill defines frontier models as AI systems trained at a computational cost of over $100 million, a threshold that could cover OpenAI, Google, xAI, Anthropic, and Meta. Developers would not be liable for incidents causing 100 or more deaths or over $1 billion in property damage, provided they did not intentionally or recklessly cause the harm and published safety, security, and transparency reports. OpenAI spokesperson Jamie Radice said the company supports reducing the risk of serious harm and avoiding a patchwork of inconsistent state regulations.
Why it matters
The bill could shift accountability for catastrophic AI failures from developers to affected parties, with direct implications for procurement teams and legal architects. Its mechanism is a liability shield for "critical harms" from frontier models, defined by the $100 million compute-cost threshold, available to developers who meet the reporting and non-recklessness conditions. Coming amid other state-level AI regulation efforts, it could set a precedent for developer responsibility and standardise legal exposure for severe, unintended consequences of advanced AI systems.