Understanding and Mitigating Bias in Imaging Artificial Intelligence

IF 5.2 · CAS Tier 1 (Medicine) · Q1, Radiology, Nuclear Medicine & Medical Imaging
RadioGraphics · Pub Date: 2024-04-18 · DOI: 10.1148/rg.230067
Ali S. Tejani, Yee Seng Ng, Yin Xi, Jesse C. Rayan
{"title":"Understanding and Mitigating Bias in Imaging Artificial Intelligence","authors":"Ali S. Tejani, Yee Seng Ng, Yin Xi, Jesse C. Rayan","doi":"10.1148/rg.230067","DOIUrl":null,"url":null,"abstract":"<p>Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. <i>Bias</i> may refer to unequal preference to a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. However, <i>cognitive bias</i> refers to systematic deviation from objective judgment due to reliance on heuristics, and <i>statistical bias</i> refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment such as automation bias, or a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI.</p><p>Published under a CC BY 4.0 license.</p><p>Test Your Knowledge questions for this article are available in the supplemental material.</p><p>See the invited commentary by Rouzrokh and Erickson in this issue.</p>","PeriodicalId":54512,"journal":{"name":"Radiographics","volume":"50 1","pages":""},"PeriodicalIF":5.2000,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiographics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1148/rg.230067","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0

Abstract

Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or may exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, or a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI.
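The definition of statistical bias above can be made concrete with the standard estimator form; the notation here is a generic textbook formulation added for illustration and is not drawn from the article itself:

$$\operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta$$

where $\hat{\theta}$ is the model's prediction or estimate and $\theta$ is the true value; a nonzero expected difference is the systematic error in model prediction described above.

One quality control measure in the spirit of the recommendations above is to evaluate model performance separately in each patient subgroup, so that differing performance among populations becomes visible before and after deployment. The sketch below is a minimal, hypothetical illustration; the column names (y_true, y_score, and the grouping variable) and the choice of AUC as the metric are assumptions, not details from the article.

```python
# Minimal sketch of a subgroup performance audit for a binary imaging classifier.
# Hypothetical column names: "y_true" (ground truth label), "y_score" (model output),
# and group_col (e.g., a demographic attribute). Not taken from the article itself.
import pandas as pd
from sklearn.metrics import roc_auc_score


def audit_subgroup_auc(df: pd.DataFrame, group_col: str,
                       label_col: str = "y_true",
                       score_col: str = "y_score") -> pd.DataFrame:
    """Compute AUC and sample size per subgroup so performance gaps are visible."""
    rows = []
    for group, sub in df.groupby(group_col):
        # AUC is undefined when a subgroup contains only one class; report NaN instead.
        if sub[label_col].nunique() < 2:
            auc = float("nan")
        else:
            auc = roc_auc_score(sub[label_col], sub[score_col])
        rows.append({"group": group, "n": len(sub), "auc": auc})
    return pd.DataFrame(rows)


# Example usage with a hypothetical validation set:
# report = audit_subgroup_auc(val_df, group_col="sex")
# print(report)
```

Large gaps in subgroup performance found this way would flag the kind of differing model behavior among patient populations that, per the abstract, can exacerbate health inequities if left unaddressed.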

Published under a CC BY 4.0 license.

Test Your Knowledge questions for this article are available in the supplemental material.

See the invited commentary by Rouzrokh and Erickson in this issue.

Source Journal
RadioGraphics (Medicine - Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 8.20 · Self-citation rate: 5.50% · Articles published: 224 · Review turnaround: 4-8 weeks
About the Journal: Launched by the Radiological Society of North America (RSNA) in 1981, RadioGraphics is one of the premier education journals in diagnostic radiology. Each bimonthly issue features 15–20 practice-focused articles spanning the full spectrum of radiologic subspecialties and addressing topics such as diagnostic imaging techniques, imaging features of a disease or group of diseases, radiologic-pathologic correlation, practice policy and quality initiatives, imaging physics, informatics, and lifelong learning. A special issue, a monograph focused on a single subspecialty or on a crossover topic of interest to multiple subspecialties, is published each October. Each issue offers more than a dozen opportunities to earn continuing medical education credits that qualify for AMA PRA Category 1 Credit™, and all online activities can be applied toward the ABR MOC Self-Assessment Requirement.