When Your Only Tool Is A Hammer: Ethical Limitations of Algorithmic Fairness Solutions in Healthcare Machine Learning

M. Mccradden, M. Mazwi, Shalmali Joshi, James A. Anderson
{"title":"When Your Only Tool Is A Hammer: Ethical Limitations of Algorithmic Fairness Solutions in Healthcare Machine Learning","authors":"M. Mccradden, M. Mazwi, Shalmali Joshi, James A. Anderson","doi":"10.1145/3375627.3375824","DOIUrl":null,"url":null,"abstract":"It is no longer a hypothetical worry that artificial intelligence - more specifically, machine learning (ML) - can propagate the effects of pernicious bias in healthcare. To address these problems, some have proposed the development of 'algorithmic fairness' solutions. The primary goal of these solutions is to constrain the effect of pernicious bias with respect to a given outcome of interest as a function of one's protected identity (i.e., characteristics generally protected by civil or human rights legislation. The technical limitations of these solutions have been well-characterized. Ethically, the problematic implication - of developers, potentially, and end users - is that by virtue of algorithmic fairness solutions a model can be rendered 'objective' (i.e., free from the influence of pernicious bias). The ostensible neutrality of these solutions may unintentionally prompt new consequences for vulnerable groups by obscuring downstream problems due to the persistence of real-world bias. The main epistemic limitation of algorithmic fairness is that it assumes the relationship between the extent of bias's impact on a given health outcome and one's protected identity is mathematically quantifiable. The reality is that social and structural factors confluence in complex and unknown ways to produce health inequalities. Some of these are biologic in nature, and differences like these are directly relevant to predicting a health event and should be incorporated into the model's design. Others are reflective of prejudice, lack of access to healthcare, or implicit bias. Sometimes, there may be a combination. 
With respect to any specific task, it is difficult to untangle the complex relationships between potentially influential factors and which ones are 'fair' and which are not to inform their inclusion or mitigation in the model's design.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3375627.3375824","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

It is no longer a hypothetical worry that artificial intelligence - more specifically, machine learning (ML) - can propagate the effects of pernicious bias in healthcare. To address these problems, some have proposed the development of 'algorithmic fairness' solutions. The primary goal of these solutions is to constrain the effect of pernicious bias on a given outcome of interest as a function of one's protected identity (i.e., characteristics generally protected by civil or human rights legislation). The technical limitations of these solutions have been well characterized. Ethically, the problematic implication - for developers and, potentially, end users - is that by virtue of algorithmic fairness solutions a model can be rendered 'objective' (i.e., free from the influence of pernicious bias). The ostensible neutrality of these solutions may unintentionally prompt new consequences for vulnerable groups by obscuring downstream problems arising from the persistence of real-world bias. The main epistemic limitation of algorithmic fairness is that it assumes the relationship between the extent of bias's impact on a given health outcome and one's protected identity is mathematically quantifiable. In reality, social and structural factors converge in complex and unknown ways to produce health inequalities. Some of these factors are biologic in nature; differences like these are directly relevant to predicting a health event and should be incorporated into the model's design. Others reflect prejudice, lack of access to healthcare, or implicit bias. Sometimes there may be a combination. With respect to any specific task, it is difficult to untangle the complex relationships among potentially influential factors - and to determine which are 'fair' and which are not - in order to inform their inclusion or mitigation in the model's design.
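The "mathematically quantifiable" assumption the abstract critiques can be made concrete with a minimal sketch of one common fairness metric, demographic parity difference: the gap in positive-prediction rates across a protected attribute. This is an illustrative example of the kind of formalization the authors discuss, not a method from the paper; the function name and synthetic data are hypothetical.

```python
# Illustrative sketch: "algorithmic fairness" solutions typically quantify
# bias as a measurable gap in model outcomes across a protected attribute.
# Demographic parity difference is one such metric.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: 0/1 model predictions; group: 0/1 protected-attribute labels.
    """
    def rate(g):
        members = [p for p, a in zip(y_pred, group) if a == g]
        return sum(members) / max(1, len(members))
    return abs(rate(0) - rate(1))

# Synthetic predictions: group 1 receives positive predictions less often.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # |0.75 - 0.25| = 0.5
```

The paper's point is precisely that a scalar like this captures only the measurable residue of bias: a model can drive this gap to zero while the upstream social and structural causes of inequality persist unmeasured.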