The Evolving Impact of AI on the Insurance Industry
October 03, 2023
New forms of artificial intelligence promise greater efficiencies. But effective governance and testing remain key to preventing ethical and legal issues.
Insurance companies and regulators alike are learning quickly that artificial intelligence (“AI”) can lead to rapid advances in the industry — whether through predictive modeling, acceleration of new offerings, greater precision in market distribution or enhanced operational efficiencies. With new AI-enabled models and generative AI tools like ChatGPT emerging as mainstream options, insurers are also streamlining once-onerous manual tasks.1 Examples include leveraging machine learning and other AI capabilities to mine and process huge quantities of numerical and language-based data.
But even as these powerful, innovative tools point toward a new day of increased efficiency, lower costs and improved decision-making, firms and regulators alike are quickly discovering some of their drawbacks. For instance, using them in concert with externally curated data sets can introduce novel ethical and legal issues for protected classes of consumers. Generative AI software has already been shown to produce “AI hallucinations” — a phenomenon where the technology provides a convincing yet ultimately false answer to a query.2 Then there are accusations about the unlawful use of data to train AI models.3
Regulators Are Reviewing
Regulators at all levels and across all jurisdictions are watching with keen interest and heightened concern. With a focus on preventing inadvertently biased or discriminatory AI outcomes, the Federal Trade Commission, for one, issued guidance in 2021 stating that unfair or deceptive practices include racially biased algorithms.4, 5 In July 2023, New York City banned the use of AI and machine learning applications in hiring decisions unless an independent bias audit is performed before such tools are put to use.6
In February 2023, the Colorado Division of Insurance proposed a statewide set of rules governing the use of AI-based predictive models and algorithms for life insurance companies that utilize external consumer data.7 This draft regulation, the first of its kind in the nation, is nearing final adoption.8 Meanwhile, the state released a separate draft regulation on AI model testing for public review and comment.9 The Colorado Insurance Commissioner’s office has confirmed that both regulations will be extended beyond life insurance to property and casualty insurers in the not-too-distant future.10
Other state insurance departments are also gearing up locally, as is the National Association of Insurance Commissioners, the industry body responsible for standard-setting and regulatory guidance and support.11
An Accountability Gap
This rising regulatory pressure, as well as the ethical and societal implications of AI, creates an underlying tension for insurers. As firms look to incorporate more of the new technology into their business models, they simultaneously must address an accountability gap where decisions are being made using models that are not fully understood or explainable. Given the ever-evolving nature of both the technology and today’s business landscape, that can be a tricky needle to thread.
Questions arise about the disparate impact that the technology may have on different groups of consumers. Specifically, how will AI affect legally protected classes of people? Will the underlying algorithms work as designed and intended? Several instances have occurred where AI models have displayed a programmatic bias that has resulted in unintended, adverse outcomes.
The moral and ethical responsibility to “get it right” is considerable and falls squarely on the shoulders of insurance company leaders. This, in turn, naturally raises the fundamental question: How to proceed?
Three Ways to Get AI Right
There is no single cure-all for potential AI-driven discrimination challenges. However, updating model and data governance, and adding independent bias testing, can help prevent some of the unintended adverse outcomes. What’s more, performing the appropriate level of due diligence can improve compliance with the consumer protection regulations rapidly emerging in markets around the globe.
At an organizational level, the following three points represent best practices:
- Redefine the Approach: For companies that are using AI or want to expand its use in their operations, effective data, modeling and decision-making governance will require the definition of new roles, responsibilities and processes that include senior company management and potentially even boards of directors. By redefining their approach and implementing material safeguards, insurers can build a foundation for how they interact cross-enterprise with AI and better predict and understand the unintended impacts of their models.
- Add Independent Assessments: Ultimately, companies will be called upon by regulators to show the impact, intended or not, of their AI. Independent assessment of models and data usage will enable insurance organizations to interact more seamlessly, efficiently and reliably with regulators and other interested parties.
- Measure Progress Regularly: Prevention and detection of unintended bias and discriminatory outcomes will require regular, ongoing testing and external reviews, as well as audits of data sources. Note that this is not a “one-and-done” deal. Instead, organizations will need to undergo a radical shift in their operational mindsets if they aren’t already tracking their progress.
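To make the regular bias testing described above concrete, the sketch below computes one widely used fairness metric: the disparate impact ratio (the “four-fifths rule” drawn from employment-selection guidance). The group names, rates and the 0.8 threshold are illustrative assumptions only — no insurance regulator cited here prescribes this exact test.

```python
# Hypothetical sketch of one common bias check: the disparate impact
# ratio ("four-fifths rule"). The threshold and sample rates below are
# illustrative assumptions, not a regulator-prescribed standard.

def disparate_impact_ratio(selection_rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest.

    A value of 1.0 means all groups are selected (e.g., approved for
    coverage) at equal rates; lower values indicate greater disparity.
    """
    rates = selection_rates.values()
    return min(rates) / max(rates)

def flag_for_review(selection_rates: dict[str, float],
                    threshold: float = 0.8) -> bool:
    """Flag the model for deeper review when the ratio falls below
    the (assumed) four-fifths threshold."""
    return disparate_impact_ratio(selection_rates) < threshold

# Example: hypothetical approval rates by demographic group.
rates = {"group_a": 0.62, "group_b": 0.45}
print(round(disparate_impact_ratio(rates), 2))  # 0.73
print(flag_for_review(rates))  # True — warrants a closer look
```

In practice, a check like this would run on every model release and on a recurring schedule against fresh outcome data, with flagged results routed to the governance roles defined in the first point above.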
Insurers would be wise to proactively and reactively assess the usage and transparency of their AI solutions. Doing so can better position them to identify and remediate potential issues related to data privacy, model efficacy and bias, which in turn can create greater consumer confidence and loyalty to drive profitable growth.
A Glimpse at the Horizon
Little more than a decade ago, it would have been almost impossible to predict how extensive AI’s role would be in today’s insurance industry. Looking ahead, it’s fair to imagine that more refined AI models will arrive that may reduce some of the concerns currently being expressed. Alternatively, they may complicate things further.
Until then, insurance leaders who want to continue leveraging the wonder of AI will need to stay abreast of the latest regulatory rulings and guidance and apply best practices to avoid the negative financial and reputational consequences that increased regulatory scrutiny will undoubtedly present.12
Footnotes:
1: Ricard, Paul, Louisa Li, Alison Flint, and Randy Lampert. “Keeping Up with Generative AI.” Oliver Wyman, (Accessed September 20, 2023). https://www.oliverwyman.com/our-expertise/insights/2023/aug/how-insurers-can-successfully-use-generative-artificial-intelligence.html
2: “Generative AI Hallucinations: Why They Occur and How to Prevent Them.” Telus International, (July 6, 2023). https://www.telusinternational.com/insights/ai-data/article/generative-ai-hallucinations
3: Banchor, Komal. “Google Sued for Stealing User Data to Train AI: Everything Ever Created and Shared on the Internet.” MSN.com, (Accessed September 14, 2023). https://www.msn.com/en-us/news/technology/google-sued-for-stealing-user-data-to-train-ai-everything-ever-created-and-shared-on-the-internet/ar-AA1dOgZW
4: Jillison, Elisa. “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI.” Federal Trade Commission, (April 19, 2021). https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai
5: Calvino, Claudio, Meloria Meschi and Dimitris Korres. “AI Assessment — Where Are We in 2022?” FTI Consulting, (March 23, 2022). https://www.fticonsulting.com/insights/articles/ai-assessment-2022
6: Kestenbaum, Jonathan. “NYC’s New AI Bias Law Broadly Impacts Hiring and Requires Audits.” Bloomberg Law, (July 5, 2023). https://news.bloomberglaw.com/us-law-week/nycs-new-ai-bias-law-broadly-impacts-hiring-and-requires-audits
7: “Colorado Prepares to Regulate AI and Big Data in Insurance.” The National Law Review, (Accessed September 20, 2023). https://www.natlawreview.com/article/colorado-prepares-to-regulate-ai-and-big-data-insurance
8: “Unfair Discrimination: Draft Proposed New Regulation.” Department of Regulatory Agencies, Division of Insurance. https://drive.google.com/file/d/1AY5UJrU7B_SN3jP-7T-Jay803xp7gdAH/view
9: Fields, Carlton. “Colorado DOI Fast-Tracks Big Data Governance Rulemaking.” JD Supra, (August 7, 2023). https://www.jdsupra.com/legalnews/colorado-doi-fast-tracks-big-data-5018762/
10: Pattison-Gordon, Julie. “Colorado Aims to Prevent AI-Driven Discrimination in Insurance.” Government Technology, (April 19, 2023), https://www.govtech.com/policy/colorado-aims-to-prevent-ai-driven-discrimination-in-insurance.
11: “Artificial Intelligence.” NAIC: Center for Insurance Policy and Research, (August 23, 2023). https://content.naic.org/cipr-topics/artificial-intelligence
12: Id.
© Copyright 2023. The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, Inc., its management, its subsidiaries, its affiliates, or its other professionals.