{"title":"Multidisciplinary collaboration on discrimination – not just “Nice to Have”","authors":"C. Dolman, Edward (Jed) Frees, Fei Huang","doi":"10.1017/S174849952100021X","DOIUrl":null,"url":null,"abstract":"Although much of the discipline of actuarial science has its roots in isolated mathematicians or small collaborative teams toiling to produce fundamental truths, practice today is frequently geared towards large collaborative teams. In some cases, these teams can cross academic disciplines. In our view, whilst certain matters can be effectively researched within isolated disciplines, others are more suited to multidisciplinary teamwork. Discrimination, particularly data-driven discrimination, is an extremely rich and broad topic. Here, we mainly focus on insurance discrimination in underwriting/pricing, and we use the word “discrimination” in an entirely neutral way, taking it to mean the act of treating distinct groups differently – whether or not such discrimination can be justified based on legal, economic or ethical grounds. Whilst narrow research into this subject is certainly possible, a broad perspective is likely to be beneficial in creating robust, well-considered solutions to actual or perceived problems. Significant harms can and, indeed, have been caused by well-intended but narrowly framed solutions to large, difficult problems. In discrimination, for example, the intuitively appealing “fairness through unawareness” is known to make overall discrimination worse in some circumstances (for a worked example, see Reid & O’Callaghan 2018). Whilst the unawareness problem has been understood in the computer science community for some time (see, e.g. Pedreschi et al. 2008), it is an idea still embedded in many laws around the world, and too frequently seen by some as a solution for data-driven discrimination. 
As with other institutions, insurers are redefining the way that they do business with the increasing capacity and computational abilities of computers, availability of new and innovative sources of data, and advanced artificial intelligence algorithms that can detect patterns in data that were previously unknown. Conceptually, Big Data and new technologies do not alter the fundamental issues of insurance discrimination; one can think of credit-based insurance scoring and price optimization as simply forerunners of this movement. Yet, old challenges may becomemore prominent in this rapidly developing landscape. Issues regarding privacy and the use of algorithmic proxies take on increased importance as insurers’ extensive use of data and computational abilities evolve. Actuaries need to be attuned to these issues and, ideally, involved in proposals to address them. For example, Frees & Huang (2021) draw upon historical, economic, legal, and computer science literatures to understand insurance discrimination. In particular, they review social and economic principles that can be used to assess whether insurance discrimination is ethical or is “unfair” and morally indefensible in some sense, examine insurance regulations and laws across different lines of business and jurisdictions, and explore the machine learning literature on mitigating proxy discrimination via algorithmic fairness. 
Taking advantage of the literature from","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Actuarial Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/S174849952100021X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BUSINESS, FINANCE","Score":null,"Total":0}
Citations: 1
Abstract
Although much of the discipline of actuarial science has its roots in the work of isolated mathematicians or small collaborative teams toiling to produce fundamental truths, practice today is frequently geared towards large collaborative teams. In some cases, these teams cross academic disciplines. In our view, whilst certain matters can be effectively researched within isolated disciplines, others are better suited to multidisciplinary teamwork. Discrimination, particularly data-driven discrimination, is an extremely rich and broad topic. Here, we focus mainly on insurance discrimination in underwriting/pricing, and we use the word "discrimination" in an entirely neutral way, taking it to mean the act of treating distinct groups differently – whether or not such discrimination can be justified on legal, economic or ethical grounds. Whilst narrow research into this subject is certainly possible, a broad perspective is likely to be beneficial in creating robust, well-considered solutions to actual or perceived problems. Significant harms can be, and indeed have been, caused by well-intended but narrowly framed solutions to large, difficult problems. In discrimination, for example, the intuitively appealing "fairness through unawareness" is known to make overall discrimination worse in some circumstances (for a worked example, see Reid & O'Callaghan 2018). Whilst the unawareness problem has been understood in the computer science community for some time (see, e.g., Pedreschi et al. 2008), it is an idea still embedded in many laws around the world, and too frequently seen by some as a solution for data-driven discrimination.
As with other institutions, insurers are redefining the way that they do business with the increasing capacity and computational abilities of computers, the availability of new and innovative sources of data, and advanced artificial intelligence algorithms that can detect patterns in data that were previously unknown. Conceptually, Big Data and new technologies do not alter the fundamental issues of insurance discrimination; one can think of credit-based insurance scoring and price optimization as simply forerunners of this movement. Yet, old challenges may become more prominent in this rapidly developing landscape. Issues regarding privacy and the use of algorithmic proxies take on increased importance as insurers' extensive use of data and computational abilities evolve. Actuaries need to be attuned to these issues and, ideally, involved in proposals to address them. For example, Frees & Huang (2021) draw upon the historical, economic, legal, and computer science literatures to understand insurance discrimination. In particular, they review social and economic principles that can be used to assess whether insurance discrimination is ethical or is "unfair" and morally indefensible in some sense, examine insurance regulations and laws across different lines of business and jurisdictions, and explore the machine learning literature on mitigating proxy discrimination via algorithmic fairness. Taking advantage of the literature from
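The "fairness through unawareness" pitfall mentioned above can be made concrete with a small numerical sketch. This is not an example from the paper or from Reid & O'Callaghan (2018): the dataset, the 90% proxy correlation, and the premium rule are invented purely for illustration. The point it demonstrates is the mechanism of proxy discrimination: a pricing rule that never sees the protected attribute can still price the protected groups differently when a correlated feature (e.g. a postcode indicator) remains in the model.

```python
import random

random.seed(0)

# Synthetic portfolio: protected group membership a ∈ {0, 1}, and a
# proxy feature (say, a postcode indicator) that matches a 90% of the
# time. Both the correlation level and the data are hypothetical.
n = 10_000
policies = []
for _ in range(n):
    a = random.randint(0, 1)
    proxy = a if random.random() < 0.9 else 1 - a
    policies.append((a, proxy))

def premium(proxy):
    # "Fairness through unawareness": the rule uses only the proxy,
    # never the protected attribute a. The loading on proxy == 1 is an
    # invented illustrative figure.
    return 100 if proxy == 0 else 150

# Average premium by protected group: the rule is "unaware" of a, yet
# the groups still end up priced differently because the proxy leaks
# group membership.
avg = {}
for g in (0, 1):
    charged = [premium(p) for a, p in policies if a == g]
    avg[g] = sum(charged) / len(charged)

print(avg)
```

With a 90% proxy correlation, group 1 pays roughly 145 on average against roughly 105 for group 0, despite the protected attribute never entering the pricing rule. This is why the machine learning literature cited above treats removing the protected attribute as insufficient and instead studies explicit algorithmic-fairness constraints.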