Several major insurance firms, including AIG, Great American, and WR Berkley, are seeking to limit their liability related to AI agents and chatbots. The move comes as insurers recognise the substantial financial risk arising from the growing use of AI in customer service, advice, and other operational areas. Their primary concerns are AI-generated errors, regulatory non-compliance, and unreliable model performance, particularly where inaccurate information or advice could cause significant financial harm.
Traditional insurance policies, such as cyber and errors and omissions (E&O) coverage, may not adequately address the distinctive risks AI poses, such as model failures, hallucinations, or regulatory non-compliance. Insurers are now working to clarify policy language and introduce AI-specific endorsements that either include or exclude AI-related risks. This includes defining what constitutes an 'AI event' and addressing issues like data poisoning, prompt injection, and biased decisions. The evolving regulatory landscape for AI, including the EU AI Act and emerging laws in regions like California and New York, adds further complexity, requiring companies to maintain detailed documentation and implement safeguards.
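The documentation burden is concrete: regulators and underwriters alike may ask a firm to produce, after the fact, a record of what a model was asked, what it answered, and which version produced the answer. As a rough illustration only (not any insurer's or regulator's prescribed format), a minimal audit trail around a chatbot call might look like the sketch below; the ask_model function and the log destination are hypothetical placeholders.

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical destination; a real system would use durable,
# access-controlled storage, not a local file.
AUDIT_LOG = Path("ai_audit_log.jsonl")

def ask_model(prompt: str) -> str:
    """Placeholder for a real chatbot/LLM call."""
    return "stub answer"

def audited_chat(prompt: str, model_version: str = "assistant-v1") -> str:
    """Call the model and append an audit record for each interaction.

    Each record captures the prompt, answer, model version, and timestamp
    that documentation-style obligations (e.g. under the EU AI Act) would
    typically require a firm to be able to reproduce.
    """
    answer = ask_model(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "prompt": prompt,
        "answer": answer,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return answer

if __name__ == "__main__":
    print(audited_chat("What does my policy cover for flood damage?"))
```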
As businesses increasingly rely on AI chatbots and virtual assistants, the need for tailored insurance coverage becomes more critical. Insurers are exploring ways to quantify AI risk, assess model performance, and address potential gaps in existing policies. This includes considering factors such as the use of third-party AI systems, data provenance, and the implementation of human-in-the-loop controls. The goal is to provide clarity on coverage triggers and ensure that policies accurately reflect the risks associated with AI technologies.
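In practice, a human-in-the-loop control can be as simple as a confidence gate: answers the model is unsure about are routed to a person rather than the customer. The sketch below assumes a hypothetical model call that returns a confidence score; the threshold and escalation path are illustrative, not a standard any insurer has published.

```python
from dataclasses import dataclass

# Illustrative cutoff; a real deployment would tune this
# against measured error rates for the specific model.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class ModelAnswer:
    text: str
    confidence: float  # assumed to come from the model or a scoring layer

def ask_model_with_confidence(prompt: str) -> ModelAnswer:
    """Placeholder for a model call that also returns a confidence score."""
    return ModelAnswer(text="stub answer", confidence=0.6)

def escalate_to_human(prompt: str, draft: ModelAnswer) -> str:
    """Placeholder: queue the question and draft answer for human review."""
    return f"[queued for human review] {prompt!r}"

def answer_customer(prompt: str) -> str:
    """Route low-confidence answers to a human instead of the customer."""
    draft = ask_model_with_confidence(prompt)
    if draft.confidence >= CONFIDENCE_THRESHOLD:
        return draft.text
    return escalate_to_human(prompt, draft)

if __name__ == "__main__":
    print(answer_customer("Am I covered if my chatbot gives bad advice?"))
```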