Viewpoint: Responsible AI is an economic, technological and social necessity

The benefits to society, including the insurance industry, are clear, but there is an increasing consensus about the uncertainties and latent risks that come with the use of artificial intelligence technologies

AI is as much about accountability as it is about intelligence

Artificial intelligence (AI) is set to revolutionise society and unlock vast business and economic potential. Analysts hail it as the defining technological advancement of the 21st century, and its transformative effects are already evident.

However, for AI to fulfil its potential in the insurance sector, accountability is paramount. AI’s pervasive influence extends from the consumer realm, where it powers smartphones, home computers, voice assistants such as Alexa and increasingly autonomous vehicles, to the corporate landscape, where its adoption continues to grow.

Tech giants’ substantial investments in AI research and development are propelling the field forward, bolstering organisations’ confidence in AI’s role as a key driver of growth.

The emergence of generative AI technologies has further accelerated the impact of AI. A recent study by Boston Consulting Group, titled “The CEO’s Roadmap on Generative AI”, anticipates the generative AI market will reach a staggering $121bn by 2027, a compound annual growth rate of 68% between 2022 and 2027.

Focusing on generative AI, recent studies have quantified its economic potential. A McKinsey & Company report from June highlights that generative AI has the potential to add trillions of dollars to the global economy, estimating it could add the equivalent of $2.6trn to $4.4trn a year.

AI already has a multitude of usage cases in the insurance sector, including automating underwriting processes, enhancing pricing accuracy and improving customer service. On the claims front, predictive analytics are now being used to detect fraudulent claims and AI is being used to streamline claims management and payments.

Moreover, AI-driven telematics is promoting safer driving habits and personalised pricing for motor insurance policies, while AI-powered image analysis is expediting both motor and property damage assessment and the payment of insurance claims.

The benefits to society, including the insurance industry, are clear, but there is an increasing consensus among analysts, businesses, technology developers, regulators and society at large about uncertainties and latent risks that come with the use of these new AI technologies.

 

The imperative of responsible AI

Mapfre’s report, “Responsible artificial intelligence: reliable, safe, and sustainable technology to generate the economy of the future”, underscores the need for responsible AI usage at the executive level. Many organisations perceive they have their AI risks under control, although concerns persist because comprehensive regulations and practical guidelines are still lacking.

McKinsey’s recent survey, “The State of AI in 2023: Generative AI’s breakout year”, aligns with these concerns, revealing that, while respondents acknowledge AI-related risks, few companies are adequately prepared to adopt generative AI or address its associated risks.

Effective risk mitigation is pivotal to avert economic, operational and reputational losses, protect privacy, prevent discrimination, maintain stability and ensure digital security. Thus, responsible AI practices must take precedence across organisations, regardless of size or industry.

Establishing this paradigm requires defining best practices, standards and services for risk assessment, monitoring and mitigation. As AI adoption surges and regulatory frameworks evolve, the demand for compliance intensifies, creating opportunities for third-party services to validate responsible AI usage.

 

Opportunity for insurers

Insurers also have an opportunity here to act both as safety nets for, and enablers of, AI-driven projects and innovations, leading with ethical AI and responsible governance. Through research and responsible technology deployment, insurers can support their customers’ AI initiatives.

Generative AI, in particular, introduces new risks in the insurance sector. These include the potential for biased or discriminatory algorithmic decisions that could lead to unfair pricing or claim denials; erosion of privacy as AI systems process and analyse vast amounts of customer data; increased susceptibility to fraudulent claims as AI-generated content becomes harder to distinguish from genuine information; and the need for robust cyber-protection measures to safeguard sensitive customer information, since AI systems themselves can be vulnerable to attack and exploitation.

Additionally, over-reliance on AI in underwriting and claims processing may reduce human oversight, potentially leading to errors that harm policyholders and insurers alike. There are also regulatory and legal risks as authorities evolve compliance requirements to manage the ethical and operational challenges AI poses to the insurance industry.

In response to these concerns, responsible AI solutions have emerged that enable bias detection, fairness testing and compliance checks during AI development.

The insurance industry stands at the threshold of an AI-driven transformation, which is poised to unlock substantial economic value. Nevertheless, responsible AI practices and effective risk mitigation are imperative for sustainable progress, safeguarding against potential pitfalls and harnessing AI’s vast potential for the insurance sector and the broader economy.

 

Bárbara Fernández is deputy director of Mapfre Open Innovation and head of Insur_space at Mapfre
