Understanding and Mitigating Bias in Imaging Artificial Intelligence
Ali S. Tejani, Yee Seng Ng, Yin Xi, Jesse C. Rayan
RadioGraphics. Published April 18, 2024. DOI: https://doi.org/10.1148/rg.230067
Abstract
Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, whether intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or may exacerbate health inequities due to differing performance among patient populations. While inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate their impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists who use AI tools in practice or collaborate with data scientists and engineers on AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures that mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI.
Published under a CC BY 4.0 license.
Test Your Knowledge questions for this article are available in the supplemental material.
See the invited commentary by Rouzrokh and Erickson in this issue.
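The abstract's distinction between statistical bias (a systematic gap between expected and true values) and differing model performance among patient populations can be made concrete with a simple quality control check. The following is a minimal, hypothetical sketch, not drawn from the article: the synthetic data, the 0.5 operating threshold, and the group labels are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth labels, model scores, and demographic subgroups.
y_true = rng.integers(0, 2, size=1000)
scores = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, size=1000), 0, 1)
groups = rng.choice(["group_A", "group_B"], size=1000, p=[0.8, 0.2])

# (1) Statistical bias of the score as an estimator of the label:
# the mean difference between predicted (expected) and true values.
statistical_bias = np.mean(scores - y_true)
print(f"statistical bias (mean predicted - true): {statistical_bias:+.3f}")

# (2) Subgroup audit: sensitivity (true-positive rate) per demographic group,
# using an assumed operating threshold of 0.5.
threshold = 0.5
y_pred = (scores >= threshold).astype(int)
for g in np.unique(groups):
    positives = (groups == g) & (y_true == 1)
    sensitivity = y_pred[positives].mean()
    print(f"{g}: sensitivity = {sensitivity:.2f} (n positives = {positives.sum()})")
```

A per-subgroup audit of this kind is one way the quality control measures described in the article could surface differing performance among patient populations before a biased model influences clinical decisions.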
Journal Description
Launched by the Radiological Society of North America (RSNA) in 1981, RadioGraphics is one of the premier education journals in diagnostic radiology. Each bimonthly issue features 15–20 practice-focused articles spanning the full spectrum of radiologic subspecialties and addressing topics such as diagnostic imaging techniques, imaging features of a disease or group of diseases, radiologic-pathologic correlation, practice policy and quality initiatives, imaging physics, informatics, and lifelong learning.
A special issue, a monograph focused on a single subspecialty or on a crossover topic of interest to multiple subspecialties, is published each October.
Each issue offers more than a dozen opportunities to earn continuing medical education credits that qualify for AMA PRA Category 1 Credit™, and all online activities can be applied toward the ABR MOC Self-Assessment Requirement.