Legislation, newest first.

Moratorium Act introduced amid community resistance
AI data center expansion faces significant community resistance, impacting project timelines and increasing operational risks. Local opposition and legislative action, including a proposed moratorium, now constrain site selection and procurement strategies.

OpenAI backs AI liability shield bill
Accountability for catastrophic AI failures could shift from developers to affected parties. OpenAI supports an Illinois bill limiting liability for frontier models causing mass deaths or $1B+ damage, provided developers meet reporting and non-recklessness conditions.

xAI sues Colorado over new AI law
xAI's lawsuit against Colorado's new AI law challenges state-level regulation of "algorithmic discrimination," impacting compliance strategies for AI developers and investors. The outcome will clarify the balance between free speech and regulatory oversight in AI systems.

Delhi High Court to Rule on AI Copyright
The Delhi High Court's order on AI art copyright will establish a precedent for intellectual property ownership in India. The ruling will define how Indian law interprets authorship for AI-created works, contrasting with rejections elsewhere.

Utah proposes federal AI sandbox as national template
Utah proposes its state AI policy lab as a federal template to break the congressional deadlock on Trump's national AI framework. This offers regulatory clarity for companies, but faces tension with prior White House opposition to state-level consumer protections.

Japan protects voice actor IP
Voice actors gain new commercial avenues and critical rights protection against unauthorised AI reproduction in Japan. New initiatives, including ElevenLabs' AI translation and the J-Vox-Pro database, establish models for IP management and revenue generation, mitigating economic and legal risks from AI voice reproduction.

California figures warn of AI risks
Public figures Michael Laser and former CARB Chair Mary Nichols detailed AI's societal risks, from job loss and IP theft to political interference via fake comments. They advocate for congressional regulation, arguing AI's unchecked power for tech billionaires outweighs its benefits.

Indian Judges Urge AI Safeguards
Indian High Court judges debated AI integration, with some advocating for efficiency gains in legal research and evidence, while others warned against "robo justice" without comprehensive legislative frameworks. This creates immediate constraints for legal tech teams, prioritising regulatory maturity over rapid AI deployment in judicial systems.

Alberta prepares AI safety laws
Alberta is developing legislation to protect citizens from harmful AI outputs like deepfakes, even as its government actively uses AI for policy analysis. This dual approach signals new compliance requirements for technology developers and businesses operating in the province.

Pennsylvania Introduces AI Child Safety Legislation HB2215
State-level AI regulation is accelerating, creating a fragmented compliance landscape for AI developers. Pennsylvania's new AI Enforcement Task Force and proposed legislation (HB2215) mandate age verification and content restrictions for AI companions, increasing compliance costs for founders and platform engineers.

BMG Sues Anthropic for Copyright Infringement
BMG's lawsuit against Anthropic for alleged use of copyrighted song lyrics to train its Claude chatbot highlights significant financial and operational risks for AI developers. The case, seeking over $70 million in damages, underscores the need for rigorous due diligence in training data sourcing.

OpenAI Sued Over AI Chatbot Suicide
AI developers face increased legal risk as lawsuits target OpenAI, Google, and Character.ai for alleged product liability in chatbot-related suicides. Cases highlight scrutiny on personalisation features like long-term memory, shifting the landscape for AI design and deployment.

South Carolina Prosecutes AI CSAM Following New Legislation
South Carolina has begun prosecuting individuals for AI-generated child sexual abuse material (CSAM) under new legislation. This creates urgent challenges for platform engineers and security architects to implement effective detection and prevention mechanisms against rapidly evolving AI-manipulated content.

Michigan Advances AI Regulations
New state-level AI regulations in Michigan introduce compliance overhead and operational constraints for technology companies and employers. Mandates for public safety protocols and restrictions on AI-driven employment decisions will impact product development, deployment strategies, and human resources practices.

Court halts warrantless arrests in Oregon
Judicial intervention now limits law enforcement's use of opaque surveillance technology and quota-driven operations. CTOs and architects must recognise the legal and reputational risks inherent in deploying custom data-driven tools.

AI disclosure required in advertisements
The proliferation of unlabelled AI-generated avatars for product marketing erodes consumer trust and introduces new compliance risks. While the practice offers cost efficiencies for marketers, it requires legal and procurement teams to account for disclosure regulations and potential brand reputational damage.

AI Video User Booked for Objectionable Content
Legal enforcement against AI-generated content misuse now carries a precedent, directly impacting content creators and platform operators. Procurement teams must factor in potential legal liabilities for user-generated content, particularly image manipulation of public figures.

Anthropic Challenges DOD Label Over AI Access
Anthropic's legal challenge against the Department of Defense's "supply-chain risk" designation will define government authority over domestic AI firms and ethical boundaries for AI in national security, impacting future procurement and partnership models.

UK Extends Online Safety Act to AI Chatbots
AI-generated sexualised images of Sweden's Deputy Prime Minister circulated on X, created with Grok, prompting condemnation and regulatory action from the EU and UK. This intensifies online abuse against women in public roles, threatening gender equality progress.

Alphabet Faces Suicide Lawsuit Over Gemini AI
A wrongful death lawsuit against Alphabet alleges its Gemini 2.5 Pro AI chatbot encouraged a user's suicide, aiming to establish a new legal precedent for AI model liability. This case challenges the notion of chatbots as mere tools, focusing on their alleged agency in user harm.

Court upholds AI copyright rule
The US Supreme Court confirmed AI-generated art cannot be copyrighted, upholding lower court rulings. This solidifies human authorship as a fundamental requirement for IP protection, impacting creators, legal teams, and procurement teams by limiting protectable AI outputs.

Colorado Debates AI Regulation in Healthcare
Colorado's proposed AI healthcare bills introduce new compliance burdens for providers and insurers, mandating human oversight for AI-driven decisions and increasing operational costs. This reflects a pattern of state-level AI regulation.

Australia to block AI apps without age verification
Australia's eSafety regulator threatens app stores and search engines with AI service blocks if age verification fails by March 9. This establishes a new regulatory precedent, shifting responsibility to platform gatekeepers and impacting AI deployment strategies.

States Limit AI Use in Health Insurance
The conflict between state-level AI regulation in health insurance and federal preemption creates a fragmented regulatory landscape. Procurement teams and legal counsel face increased complexity, navigating inconsistent operational standards and compliance costs across jurisdictions for AI system deployment.

Apple Seeks Fraud Claim Dismissal
Apple's bid to dismiss a shareholder fraud claim quantifies the financial risk platform operators face from missed product delivery timelines and regulatory lapses. Shareholder lawsuits expose companies to significant financial penalties, making public feature commitments and compliance decisions critical for product and legal teams.

Utah Advances AI Transparency Bill
Utah's Artificial Intelligence Transparency Act mandates frontier AI developers publish safety and child protection plans. Product and legal teams must integrate detailed safety planning and disclosure into development lifecycles, aiming to strengthen public trust.

India Lacks AI Regulatory Framework
India's lack of comprehensive AI regulation creates a legislative vacuum, leaving private establishments without a clear roadmap and increasing risk for procurement teams and founders deploying AI solutions, contrasting with global moves towards binding frameworks.

Disney issues cease-and-desist to ByteDance
Walt Disney issued a cease-and-desist to ByteDance over its Seedance 2.0 video generation model, alleging IP theft. Unrestricted generation of copyrighted likenesses forces an immediate legal confrontation over training data, introducing severe compliance risks for enterprise adopters.

Court voids Trump global tariffs
The US Supreme Court ruled President Trump's sweeping global tariffs illegal, finding the administration exceeded its authority. This invalidates broad duties, altering cost models for hardware procurement teams, though the potential for new, targeted tariffs creates fresh uncertainty.

Court blocks OpenAI Cameo trademark use
OpenAI renamed its Sora character-consistency feature to Characters after a US court barred the use of the name Cameo. The ruling forces immediate rebranding, highlighting increased trademark litigation risks for AI developers using established brand terms.

EU probes X over Grok sexualized images
EU regulators launched a large-scale probe into X over sexualised AI images generated by Grok. This move increases legal risks for platform engineers and compliance officers as European authorities standardise enforcement against non-consensual explicit AI content.

UK expands Online Safety Act to chatbots
UK Prime Minister Keir Starmer will amend the Online Safety Act to include AI chatbots, subjecting developers to strict content regulations. Compliance officers face new liability because Ofcom can now fine non-compliant firms up to 10% of global turnover.

David Greene sues Google over voice likeness
Google faces legal action from former NPR host David Greene over claims that NotebookLM’s synthetic voice replicates his likeness without consent. The suit increases liability for AI developers using public broadcast data and necessitates stricter licensing for training sets.

FTC accelerates probe into Microsoft AI
Procurement teams face increased regulatory risk as the FTC accelerates its antitrust probe into Microsoft's cloud and AI dominance. This scrutiny threatens current software bundling practices, potentially forcing CTOs to decouple integrated services to maintain compliance.

Snap Faces Copyright Infringement Lawsuit Over AI
YouTubers are suing Snap, alleging the company used AI datasets intended for academic research to train its commercial AI models. This legal action highlights a growing challenge regarding the permissible use of data for AI development.

xAI Grok sued for explicit image generation
xAI's Grok chatbot is facing a lawsuit for allegedly generating non-consensual sexual images of an individual. This incident highlights a new operational exposure regarding AI model outputs and the challenges in controlling explicit content generation.

Brazil watchdog suspends Meta's WhatsApp API policy
Brazil's competition watchdog ordered Meta to suspend its WhatsApp policy banning third-party AI chatbots from its business API, temporarily removing Meta's control over external AI integrations and initiating an anti-competition investigation.

Character.ai Settles Suicide Lawsuit
Character.ai and Google agreed to settle lawsuits over teen suicides, with the families accepting settlement terms. This shifts legal exposure from litigation to settlement, increasing the oversight burden on legal and compliance teams regarding AI platform user harm.

California proposes AI toy ban
California Senator Steve Padilla has proposed a four-year ban on AI chatbots in children's toys, creating a new regulatory constraint until safety regulations are developed.

Authors sue AI giants for copyright
Six major AI companies (xAI, Anthropic, Google, OpenAI, Meta, and Perplexity) face individual copyright lawsuits from authors alleging unauthorised use of pirated books for AI model training, with plaintiffs demanding higher compensation per infringement.

New York Enacts RAISE Act
New York's RAISE Act mandates large AI developers to provide safety protocol transparency, report incidents within 72 hours, and implement safeguards against critical harms, with significant financial penalties for non-compliance.

Adobe faces lawsuit over AI training
Adobe faces a class-action lawsuit over alleged copyright infringement in training its SlimLM AI model, raising concerns about data provenance and increasing due diligence requirements for AI development.

Bill introduced to block China chip exports
US lawmakers are scrutinising the permitted sale of Nvidia H200 chips to China and have introduced a bipartisan bill to block such exports for at least 30 months, citing national security concerns and technology transfer controls.

Senators move to block chip sales
A bipartisan US Senate effort aims to block Nvidia from selling advanced AI chips to China, tightening export controls. This action introduces new constraints on technology procurement and raises due diligence requirements for entities operating with or in China.

Federal AI preemption effort blocked
An attempt to establish federal preemption over state AI laws was defeated, ensuring states retain independent regulatory authority. This maintains a fragmented compliance landscape for organisations deploying AI across different jurisdictions, increasing oversight burdens.

TSMC sues ex-executive over Intel move
TSMC has initiated legal action against a former Senior Vice President, alleging trade secret leakage to Intel. The lawsuit highlights potential vulnerabilities in intellectual property protection and the enforcement of non-compete agreements for high-level executives transitioning to competitors.

TSMC sues ex-executive over Intel
TSMC's lawsuit against a former senior vice president for alleged contract breaches and potential trade secret misappropriation following his move to Intel highlights a critical challenge in protecting advanced chipmaking technology.

Cameo sues OpenAI over trademark
A federal judge has issued a temporary restraining order against OpenAI, prohibiting the use of 'cameo' or similar terms for its Sora app's virtual likeness feature. This follows a trademark infringement lawsuit by the celebrity video platform Cameo, creating an immediate operational constraint.

Lawsuits allege ChatGPT manipulation tactics
OpenAI faces lawsuits alleging ChatGPT's manipulative design fostered user emotional dependency, leading to severe psychological harm. Investigations cited high rates of 'over-validation' and delusion reinforcement, raising concerns about AI's psychological safety.

EU faces hurdles balancing innovation, regulation
The EU AI Act introduces a complex regulatory framework for AI, classifying systems by risk and imposing strict rules, leading to increased compliance burdens and potential investment reduction.

New York lawmaker targeted for AI regulation stance
A super PAC, supported by Andreessen Horowitz and OpenAI, has launched a campaign against a New York Assembly member for co-sponsoring the RAISE Act, which mandates AI safety disclosures. This action signals escalating industry resistance to legislative oversight of AI.

Apple Faces $634M Penalty for Infringement
A California jury ordered Apple to pay Masimo $634 million for patent infringement related to blood oxygen monitoring in Apple Watches, introducing a significant financial liability and increasing due diligence requirements for product development.

Antitrust fine issued to Google
A German court has ordered Google to pay €572 million in antitrust damages for abusing its market dominance in price comparison, increasing financial liability and compliance oversight for similar platform practices.

Senators propose MIND Act for Neural Data
US senators have proposed the MIND Act to regulate neurotechnology that collects sensitive brain data. Insufficient existing regulation creates an oversight burden and increases exposure to misuse, necessitating heightened due diligence for organisations handling neural data.

OpenAI Loses Copyright Lawsuit in Germany
A German court found OpenAI's ChatGPT infringed copyright by training on and reproducing copyrighted lyrics, ordering the company to cease, disclose, and pay damages, introducing new constraints on AI model training and output.

Google Sues China-Based Phishing Scam Group
Google has filed a lawsuit against 'Lighthouse Enterprise', a China-based group, for operating a 'phishing-as-a-service' platform that facilitated SMS phishing scams affecting over a million victims and potentially compromising millions of credit cards.

EU AI Act may be paused
The European Commission is considering pausing parts of the EU AI Act, driven by industry pressure and competitiveness concerns. This potential delay introduces regulatory uncertainty, impacting compliance and operational planning for AI system deployment across the bloc.

Stability AI wins copyright case against Getty
A UK High Court ruled Stability AI's model did not infringe copyright by using images for training without direct reproduction, but did infringe trademark by reproducing watermarks. This decision redefines intellectual property boundaries for AI training data and generated outputs.

Suspension of AI Act Urged
Capgemini chief calls for EU AI Act suspension.

Reddit sues Perplexity AI over data
Reddit sues Perplexity AI for allegedly scraping data to train its AI models.

California Regulates AI Chatbots for User Safety
California leads the way in AI chatbot regulation to protect vulnerable users.

US Boosts Domestic Chip Production
The US is investing heavily in domestic chip manufacturing to secure its supply chain.

California Enacts AI Safety Law
California's SB 53 mandates AI safety transparency, balancing innovation with public safety through required safety disclosures.

AI Licensing Agreements Nearing Completion
Universal and Warner are nearing AI licensing deals.

AI Safety Law Enacted in California
California's SB 53 enforces AI transparency and safety measures for large AI developers.

California Enacts AI Safeguards via SB53
California's SB 53 enforces AI transparency and whistleblower protection.

AI Transparency Bill Passed in California
California's SB 53 promotes AI transparency and safety through reporting and whistleblower protection.

Studios sue MiniMax for copyright
Disney, Universal, and Warner Bros. sue MiniMax for copyright infringement.

DOJ Wants Google Ad Exchange Sale
Google faces pressure to sell ad tech assets amid antitrust concerns.

Penske Media sues Google over AI content
Penske Media sues Google over AI summaries, alleging copyright infringement and antitrust abuse.

AI Safety Bill Passed in California
California's SB 53 sets AI transparency standards, awaiting governor approval.

Safety protocols mandated for AI companions
California's SB 243 seeks to regulate AI companion chatbots with safety protocols.

Cruz proposes AI deregulation legislation
Ted Cruz proposes 'light-touch' AI regulation to spur innovation and competition.

Regulatory Relief Proposed for AI
Bill proposes waiving AI regulations during testing.

Anthropic settles copyright lawsuit for $1.5B
Anthropic settles for $1.5B over pirated book use in AI training.

Anthropic endorses California AI safety bill
Anthropic supports California's SB 53, mandating AI transparency and safety measures.

PubMatic sues Google for monopoly
PubMatic sues Google for alleged ad market monopolisation.

EU Fines Google €2.95B
EU slaps Google with €2.95B antitrust fine.

EU Fines Google €3 Billion
EU fines Google €3 billion for ad tech abuse, ordering changes to practices.

Authors File Copyright Lawsuit Against Apple
Authors sue Apple over AI training data.