Insurance Day is part of Maritime Intelligence


Who is in charge? How to obey AI regulation

AI adoption in insurance is only likely to accelerate, and insurers will need to remain responsive and alert as regulation and legal expectations evolve in parallel

Increasing reliance on artificial intelligence means greater legal and regulatory exposure

Artificial intelligence (AI) technologies, such as machine learning, natural language processing and automation, have become embedded across many areas of the insurance sector, with insurers increasingly relying on AI for functions such as automated claims handling, fraud detection, pricing risk and customer service through chatbots.

These tools offer clear benefits in terms of efficiency, cost reduction and enhanced customer experience. However, as insurers’ dependence on AI deepens, so too does their exposure to legal risk and regulatory scrutiny.

Traditionally, legal liability in insurance has been grounded in human decision-making, contractual obligations between the insurer and the insured and regulatory compliance. The introduction of AI complicates this by raising difficult questions about responsibility when AI-driven systems produce errors, unfair outcomes or breaches of duty. Potential areas of liability include biased algorithms, opaque or unexplained decisions, data security failures and reliance on inaccurate or incomplete datasets.

A central issue is the allocation of liability between insurers that deploy AI systems and the third-party providers that develop them. Jurisdictions worldwide are still developing approaches to this, particularly where AI systems operate with a degree of autonomy that limits predictability and direct control. Against this backdrop, insurers must actively assess their risk profiles and implement governance arrangements that are proportionate, transparent and robust.

 

Claims and litigation

The use of AI in claims assessment introduces new and evolving risks. For instance, an automated decision-making system may incorrectly reject a legitimate claim as a result of flawed algorithmic logic or biased training data, exposing the insurer to claims of unfair treatment or breach of contract. Reputational damage may also arise if customers perceive AI-driven outcomes as discriminatory, if insurers lack transparency about AI use, or if customers are denied meaningful routes for challenge and redress.

There are also internal risks linked to AI systems’ heavy reliance on historical data, which can entrench existing inefficiencies or hidden biases within opaque “black box” decision-making processes. Equally, past performance does not guarantee future reliability, particularly where economic, social or environmental conditions change or the AI model itself experiences “drift”, leading to poorer performance. As a result, the ongoing effectiveness of AI systems in new contexts can be difficult to assess with confidence.
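Drift of this kind can be monitored with simple statistical checks rather than left to surface through failed claims. As an illustrative sketch (not a method described in this article), a population stability index comparison between the distribution a model was trained on and the distribution it now sees will flag when inputs have shifted; the threshold of roughly 0.25 is a common rule of thumb, not a regulatory standard:

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10):
    """Compare two numeric samples; values above ~0.25 are commonly
    treated as a signal of significant distribution drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against identical min/max

    def bucket_shares(values):
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        # Smoothing avoids log(0) when a bucket is empty in one sample
        return [(counts.get(b, 0) + 0.5) / (len(values) + 0.5 * bins)
                for b in range(bins)]

    e = bucket_shares(expected)
    a = bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A claims model trained on pre-pandemic data, for example, would show a rising index as claim-amount distributions shifted, prompting review before outcomes degrade.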

In litigation, AI-generated outputs may be scrutinised, with parties challenging both their reliability and the legality of the underlying decision-making processes. Courts and regulators are increasingly focused on “explainability” – the ability to demonstrate how and why an AI system reached a particular conclusion – and on redress mechanisms. An inability to provide such explanations and redress processes may weaken an insurer’s legal position, attract regulatory sanctions and ultimately affect financial performance.

Regulatory focus on AI in insurance is intensifying across the UK and Europe. The EU Artificial Intelligence Act, which entered into force on August 1, 2024, introduces a risk-based framework for AI regulation, imposing heightened obligations on high-risk uses, including certain insurance and credit-related applications. Insurers must also demonstrate compliance with data protection regimes such as the UK GDPR and EU GDPR, alongside broader requirements relating to fairness and transparency.

Importantly, compliance should be treated as an ongoing process rather than a box-ticking, one-off exercise. Regular audits of AI systems, strong data governance practices and clear customer communications about how AI is used, and how problematic outcomes are tracked, escalated and remediated, are essential. Taking a proactive and structured approach not only reduces legal and regulatory exposure but also supports trust in AI-enabled insurance products.

 

Practical steps for insurers

AI literacy and training: insurance professionals should develop a practical understanding of AI technologies and their implications. This includes training on ethical use, recognising bias and appreciating the limitations of automated decision-making as AI becomes further embedded in organisational processes.

Robust governance frameworks: insurers should establish clear policies covering the design, deployment and monitoring of AI systems. Accountability for AI oversight should be clearly assigned, with mechanisms for human review of AI-driven decisions and ongoing assessment of whether AI remains appropriate for specific tasks.
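One way to make the human-review requirement above concrete is a routing rule that decides which automated outcomes may be finalised without intervention. The sketch below is hypothetical: the field names, the 0.9 confidence threshold and the policy of always escalating rejections are illustrative assumptions, not an industry standard:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    claim_id: str
    outcome: str        # e.g. "approve" or "reject"
    confidence: float   # the model's own score, 0.0 to 1.0

def route(decision, threshold=0.9):
    """Send low-confidence outcomes, and all adverse outcomes,
    to a human reviewer rather than auto-finalising them."""
    if decision.confidence < threshold or decision.outcome == "reject":
        return "human_review"
    return "auto_finalise"
```

The design choice here is that adverse decisions are never fully automated, which aligns with the redress expectations discussed earlier, while high-confidence approvals keep the efficiency benefit.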

Documentation and explainability: comprehensive records should be maintained covering AI models, data sources, methodologies and decision logic. Insurers must be able to explain AI-driven outcomes to customers, courts and regulators, supported by clear evidence of compliance and proper use.
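In practice, the record-keeping described above often takes the form of a per-decision audit log. As a minimal sketch, assuming hypothetical field names (the article prescribes no particular schema), each AI-driven outcome can be captured with the model version, the inputs the model actually saw and human-readable reason codes:

```python
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, outcome, reason_codes):
    """Build a serialisable audit record for an AI-driven decision so it
    can later be explained to a customer, court or regulator."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,              # features supplied to the model
        "outcome": outcome,
        "reason_codes": reason_codes,  # drivers of the outcome, in plain terms
    }
    return json.dumps(record, sort_keys=True)
```

Storing reason codes alongside raw inputs is what makes a later "how and why" explanation possible without re-running the model.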

Data quality and management: data used to train and operate AI systems should be accurate, representative and regularly reviewed, updated and tested to minimise bias and data drift. Appropriate mitigation and correction measures should be applied where issues are identified.

Legal and regulatory monitoring: insurers should closely track developments in AI-related regulation and guidance, engaging with legal experts and industry bodies to anticipate change and adjust practices accordingly.

Incident response planning: clear procedures should be in place to respond to AI-related incidents, including incorrect claims decisions or data breaches. This should include communication strategies, remediation steps and structured engagement with regulators.

AI adoption in insurance will accelerate, driving innovation in underwriting, claims processing and customer service. As regulation and legal expectations evolve in parallel, insurers will need to remain responsive and alert. Strategic investment in AI literacy, governance and risk management will be key to unlocking AI’s benefits while minimising legal liabilities.

 

Megha Kumar is chief product officer and head of geopolitical risk at CyXcel
