Bias amplification to facilitate the systematic evaluation of bias mitigation methods.

IF 6.7 | CAS Tier 2 (Medicine) | JCR Q1 (Computer Science, Information Systems)
Alexis Burgon, Yuhang Zhang, Nicholas Petrick, Berkman Sahiner, Kenny H Cha, Ravi K Samala
{"title":"Bias amplification to facilitate the systematic evaluation of bias mitigation methods.","authors":"Alexis Burgon, Yuhang Zhang, Nicholas Petrick, Berkman Sahiner, Kenny H Cha, Ravi K Samala","doi":"10.1109/JBHI.2024.3491946","DOIUrl":null,"url":null,"abstract":"<p><p>The future of artificial intelligence (AI) safety is expected to include bias mitigation methods from development to application. The complexity and integration of these methods could grow in conjunction with advances in AI and human-AI interactions. Numerous methods are being proposed to mitigate bias, but without a structured way to compare their strengths and weaknesses. In this work, we present two approaches to systematically amplify subgroup performance bias. These approaches allow for the evaluation and comparison of the effectiveness of bias mitigation methods on AI models by varying the degrees of bias, and can be applied to any classification model. We used these approaches to compare four off-the-shelf bias mitigation methods. Both amplification approaches promote the development of learning shortcuts in which the model forms associations between patient attributes and AI output. We demonstrate these approaches in a case study, evaluating bias in the determination of COVID status from chest x-rays. The maximum achieved increase in performance bias, measured as a difference in predicted prevalence, was 72% and 32% for bias between subgroups related to patient sex and race, respectively. These changes in predicted prevalence were not accompanied by substantial changes in the differences in subgroup area under the receiver operating characteristic curves, indicating that the increased bias is due to the formation of learning shortcuts, not a difference in ability to distinguish positive and negative patients between subgroups.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Biomedical and Health Informatics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/JBHI.2024.3491946","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The future of artificial intelligence (AI) safety is expected to include bias mitigation methods from development to application. The complexity and integration of these methods could grow in conjunction with advances in AI and human-AI interactions. Numerous methods are being proposed to mitigate bias, but without a structured way to compare their strengths and weaknesses. In this work, we present two approaches to systematically amplify subgroup performance bias. These approaches allow for the evaluation and comparison of the effectiveness of bias mitigation methods on AI models by varying the degrees of bias, and can be applied to any classification model. We used these approaches to compare four off-the-shelf bias mitigation methods. Both amplification approaches promote the development of learning shortcuts in which the model forms associations between patient attributes and AI output. We demonstrate these approaches in a case study, evaluating bias in the determination of COVID status from chest x-rays. The maximum achieved increase in performance bias, measured as a difference in predicted prevalence, was 72% and 32% for bias between subgroups related to patient sex and race, respectively. These changes in predicted prevalence were not accompanied by substantial changes in the differences in subgroup area under the receiver operating characteristic curves, indicating that the increased bias is due to the formation of learning shortcuts, not a difference in ability to distinguish positive and negative patients between subgroups.
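The abstract does not spell out the two amplification approaches, but one generic way to induce the attribute-output shortcut it describes is to subsample the training set so that a patient attribute becomes correlated with the class label. The Python sketch below is an illustrative stand-in under that assumption, not the authors' method; the function name, `rho` parameter, and two-subgroup setup are hypothetical.

    # Hypothetical illustration only: the paper's two amplification approaches
    # are not described in this abstract. A generic way to induce an
    # attribute-label shortcut is to drop training samples so the attribute
    # and the label become correlated.
    import numpy as np

    def skew_training_set(y, group, rho, seed=None):
        """Indices of a subsample in which group A is enriched for positives
        and group B for negatives; rho in [0, 1] sets the skew strength."""
        rng = np.random.default_rng(seed)
        y, group = np.asarray(y), np.asarray(group)
        a, b = np.unique(group)  # assumes exactly two subgroups
        # Candidates to drop: negatives in group A, positives in group B.
        drop_pool = np.flatnonzero(((group == a) & (y == 0)) |
                                   ((group == b) & (y == 1)))
        dropped = rng.choice(drop_pool, size=int(rho * len(drop_pool)),
                             replace=False)
        keep = np.ones(len(y), dtype=bool)
        keep[dropped] = False
        return np.flatnonzero(keep)

Sweeping `rho` from 0 toward 1 yields training sets with an increasingly strong attribute-label association, mirroring the abstract's idea of varying the degree of bias to stress-test mitigation methods.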

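The abstract's two evaluation quantities, the between-subgroup difference in predicted prevalence and the between-subgroup AUROC difference, can be computed directly from model scores. A minimal sketch follows, assuming binary labels, exactly two subgroups, and a fixed decision threshold; the function name and the 0.5 threshold default are illustrative, not from the paper.

    # Minimal sketch (not the authors' code): per-subgroup predicted
    # prevalence and AUROC, and the between-subgroup gap in each.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def subgroup_bias_metrics(y_true, y_score, group, threshold=0.5):
        """Return (predicted-prevalence gap, AUROC gap) across two subgroups."""
        y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
        prevalence, auroc = {}, {}
        for g in np.unique(group):
            m = group == g
            # Predicted prevalence: fraction of subgroup g called positive.
            prevalence[g] = float(np.mean(y_score[m] >= threshold))
            # Subgroup AUROC: ranking ability within subgroup g alone.
            auroc[g] = roc_auc_score(y_true[m], y_score[m])
        a, b = sorted(prevalence)  # assumes exactly two subgroups (e.g. sex)
        return prevalence[b] - prevalence[a], auroc[b] - auroc[a]

A large prevalence gap alongside a near-flat AUROC gap is the abstract's signature of a learning shortcut: the model's operating point shifts per subgroup even though its ability to rank positives above negatives does not.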
Source journal
IEEE Journal of Biomedical and Health Informatics
Categories: Computer Science, Information Systems; Computer Science, Interdisciplinary Applications
CiteScore: 13.60
Self-citation rate: 6.50%
Articles published: 1151
Journal description: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.