Pitfalls and Best Practices in Evaluation of AI Algorithmic Biases in Radiology.

IF 12.1 · Tier 1 (Medicine) · Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Radiology · Pub Date: 2025-05-01 · DOI: 10.1148/radiol.241674
Paul H Yi, Preetham Bachina, Beepul Bharti, Sean P Garin, Adway Kanhere, Pranav Kulkarni, David Li, Vishwa S Parekh, Samantha M Santomartino, Linda Moy, Jeremias Sulam
{"title":"Pitfalls and Best Practices in Evaluation of AI Algorithmic Biases in Radiology.","authors":"Paul H Yi, Preetham Bachina, Beepul Bharti, Sean P Garin, Adway Kanhere, Pranav Kulkarni, David Li, Vishwa S Parekh, Samantha M Santomartino, Linda Moy, Jeremias Sulam","doi":"10.1148/radiol.241674","DOIUrl":null,"url":null,"abstract":"<p><p>Despite growing awareness of problems with fairness in artificial intelligence (AI) models in radiology, evaluation of algorithmic biases, or AI biases, remains challenging due to various complexities. These include incomplete reporting of demographic information in medical imaging datasets, variability in definitions of demographic categories, and inconsistent statistical definitions of bias. To guide the appropriate evaluation of AI biases in radiology, this article summarizes the pitfalls in the evaluation and measurement of algorithmic biases. These pitfalls span the spectrum from the technical (eg, how different statistical definitions of bias impact conclusions about whether an AI model is biased) to those associated with social context (eg, how different conventions of race and ethnicity impact identification or masking of biases). Actionable best practices and future directions to avoid these pitfalls are summarized across three key areas: <i>(a)</i> medical imaging datasets, <i>(b)</i> demographic definitions, and <i>(c)</i> statistical evaluations of bias. Although AI bias in radiology has been broadly reviewed in the recent literature, this article focuses specifically on underrecognized potential pitfalls related to the three key areas. By providing awareness of these pitfalls along with actionable practices to avoid them, exciting AI technologies can be used in radiology for the good of all people.</p>","PeriodicalId":20896,"journal":{"name":"Radiology","volume":"315 2","pages":"e241674"},"PeriodicalIF":12.1000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127964/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1148/radiol.241674","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Despite growing awareness of problems with fairness in artificial intelligence (AI) models in radiology, evaluation of algorithmic biases, or AI biases, remains challenging due to various complexities. These include incomplete reporting of demographic information in medical imaging datasets, variability in definitions of demographic categories, and inconsistent statistical definitions of bias. To guide the appropriate evaluation of AI biases in radiology, this article summarizes the pitfalls in the evaluation and measurement of algorithmic biases. These pitfalls span the spectrum from the technical (eg, how different statistical definitions of bias impact conclusions about whether an AI model is biased) to those associated with social context (eg, how different conventions of race and ethnicity impact identification or masking of biases). Actionable best practices and future directions to avoid these pitfalls are summarized across three key areas: (a) medical imaging datasets, (b) demographic definitions, and (c) statistical evaluations of bias. Although AI bias in radiology has been broadly reviewed in the recent literature, this article focuses specifically on underrecognized potential pitfalls related to the three key areas. By raising awareness of these pitfalls and providing actionable practices to avoid them, this article aims to help ensure that exciting AI technologies can be used in radiology for the good of all people.
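To make the pitfall about statistical definitions concrete, the minimal sketch below is a hypothetical example (the groups, prevalences, and classifier operating points are invented for illustration, not taken from the article). It simulates a classifier with identical sensitivity and specificity in two patient groups: the model satisfies error-rate criteria such as equalized odds by construction, yet shows a large demographic parity gap simply because disease prevalence differs between the groups.

```python
# Hypothetical illustration (not from the article): the same predictions,
# judged by two different statistical definitions of bias, can yield
# opposite conclusions about whether a model is "biased".
import numpy as np

rng = np.random.default_rng(0)

n = 100_000  # synthetic patients per group

# Invented prevalences: disease is more common in group B than in group A.
y_a = rng.random(n) < 0.10
y_b = rng.random(n) < 0.30

def predict(y_true, tpr=0.80, fpr=0.05):
    """Simulate a classifier with the SAME sensitivity and specificity
    in every group (ie, it satisfies equalized odds by construction)."""
    p_positive = np.where(y_true, tpr, fpr)
    return rng.random(y_true.shape) < p_positive

yhat_a, yhat_b = predict(y_a), predict(y_b)

# Demographic parity compares positive-prediction rates across groups.
dp_gap = abs(yhat_a.mean() - yhat_b.mean())

# Equal opportunity (a relaxation of equalized odds) compares
# true-positive rates across groups.
eo_gap = abs(yhat_a[y_a].mean() - yhat_b[y_b].mean())

print(f"demographic parity gap: {dp_gap:.3f}")  # ~0.15 -> looks biased
print(f"equal opportunity gap:  {eo_gap:.3f}")  # ~0.00 -> looks unbiased
```

Under one definition the model appears biased; under the other it does not, which is precisely how the choice of statistical definition can drive conclusions about whether an AI model is biased.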

Source Journal
Radiology (Medicine – Nuclear Medicine)
CiteScore: 35.20
Self-citation rate: 3.00%
Annual publications: 596
Average review time: 3.6 months
Journal introduction: Published regularly since 1923 by the Radiological Society of North America (RSNA), Radiology has long been recognized as the authoritative reference for the most current, clinically relevant, and highest-quality research in the field of radiology. Each month the journal publishes approximately 240 pages of peer-reviewed original research, authoritative reviews, well-balanced commentary on significant articles, and expert opinion on new techniques and technologies. Radiology publishes cutting-edge, impactful imaging research articles in radiology and medical imaging to help improve human health.