AI-Enabled Underwriting Brings New Challenges for Life Insurance: Policy and Regulatory Considerations

Azish Filabi, Sophia Duffy
{"title":"AI-Enabled Underwriting Brings New Challenges for Life Insurance: Policy and Regulatory Considerations","authors":"Azish Filabi, Sophia Duffy","doi":"10.52227/25114.2021","DOIUrl":null,"url":null,"abstract":"Insurers are increasingly using novel data sources and automated systems for risk classification and underwriting. Automation has improved operational efficiencies in the accuracy and speed of underwriting, but it also raises new considerations relating to unfair discrimination. In this paper, we review the current regulatory structures relating to unfair discrimination and suggest they are insufficient to police the myriad new big data sources available. Moreover, AI-enabled systems increase the risk of unfair discrimination if a facially neutral factor is utilized by an automated system as a proxy for a prohibited characteristic. Furthermore, many insurers rely on unregulated third-party algorithm developers, and therefore do not own and may not have access to the logic embedded in the system, which raises unique ethical implications, particularly with respect to accountability among AI actors.\n\nTo address these issues, we propose a framework that consists of three parts: (a) the establishment of national standards to serve as guardrails for acceptable design and behavior of AI-enabled systems; (b) a certification system that attests that an AI-enabled system was developed in accordance with those standards; and (c) periodic audits of\nthe systems’ output to ensure it operated consistent with those standards. The framework rests on the existing state-based regulatory infrastructure and envisions a self-regulatory organization who can work with the NAIC to develop standards and oversee certification and audit processes. Regulatory enforcement remains with the states. Part I describes the use of technology in life insurance underwriting. 
Part II discusses the unfair discrimination that can occur due to factors that reflect societal biases, and the unfair discrimination that could occur in artificially intelligent systems if facially neutral factors are substituted by the system for prohibited factors. The current industry standards and regulatory scheme for unfair discrimination in underwriting is also discussed in Part II. Part III describes the ethical concerns regarding accountability when third-party data inputs and underwriting systems are utilized. In Part IV, we propose a governance approach and framework to address these concerns.","PeriodicalId":261634,"journal":{"name":"Journal of Insurance Regulation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Insurance Regulation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52227/25114.2021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Insurers are increasingly using novel data sources and automated systems for risk classification and underwriting. Automation has improved the accuracy and speed of underwriting, but it also raises new considerations relating to unfair discrimination. In this paper, we review the current regulatory structures relating to unfair discrimination and suggest they are insufficient to police the myriad new big data sources available. Moreover, AI-enabled systems increase the risk of unfair discrimination if a facially neutral factor is utilized by an automated system as a proxy for a prohibited characteristic. Furthermore, many insurers rely on unregulated third-party algorithm developers, and therefore do not own and may not have access to the logic embedded in the system, which raises unique ethical implications, particularly with respect to accountability among AI actors. To address these issues, we propose a framework that consists of three parts: (a) the establishment of national standards to serve as guardrails for acceptable design and behavior of AI-enabled systems; (b) a certification system that attests that an AI-enabled system was developed in accordance with those standards; and (c) periodic audits of the systems' output to ensure they operate consistently with those standards. The framework rests on the existing state-based regulatory infrastructure and envisions a self-regulatory organization that can work with the NAIC to develop standards and oversee certification and audit processes. Regulatory enforcement remains with the states. Part I describes the use of technology in life insurance underwriting. Part II discusses the unfair discrimination that can occur due to factors that reflect societal biases, and the unfair discrimination that could occur in artificially intelligent systems if facially neutral factors are substituted by the system for prohibited factors. The current industry standards and regulatory scheme for unfair discrimination in underwriting are also discussed in Part II. Part III describes the ethical concerns regarding accountability when third-party data inputs and underwriting systems are utilized. In Part IV, we propose a governance approach and framework to address these concerns.
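To make the third prong of the proposed framework concrete, a periodic output audit could take a form like the following minimal sketch. This is an illustration only, not the paper's method: the function names, the use of approval rates as the audited output, and the four-fifths-style disparity screen (borrowed from employment-selection practice) are all assumptions introduced here for the example.

```python
# Hypothetical sketch of a periodic output audit for an AI underwriting
# system. Assumes the auditor can see the system's decisions grouped by a
# protected attribute; names and the 0.8 threshold are illustrative.

def approval_rate(decisions):
    """Fraction of 'approve' outcomes in a list of approve/decline decisions."""
    return sum(d == "approve" for d in decisions) / len(decisions)

def audit_disparity(decisions_by_group, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    highest group's rate -- one example of a checkable output standard."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r < threshold * best}

# Example: group B is approved half as often as group A, so it is flagged.
outcomes = {
    "A": ["approve"] * 8 + ["decline"] * 2,   # 80% approved
    "B": ["approve"] * 4 + ["decline"] * 6,   # 40% approved
}
flagged = audit_disparity(outcomes)
print(flagged)  # {'B': 0.5}
```

A real audit regime of the kind the paper envisions would of course look at richer outputs than binary approvals (e.g., premium classes) and would be defined by the national standards themselves; the point of the sketch is only that "operated consistently with those standards" can be reduced to mechanical, repeatable checks on system output.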