External Testing of a Commercial AI Algorithm for Breast Cancer Detection at Screening Mammography.

IF 8.1 | Q1 | Computer Science, Artificial Intelligence
John Brandon Graham-Knight, Pengkun Liang, Wenna Lin, Quinn Wright, Hua Shen, Colin Mar, Janette Sam, Rasika Rajapakshe
{"title":"External Testing of a Commercial AI Algorithm for Breast Cancer Detection at Screening Mammography.","authors":"John Brandon Graham-Knight, Pengkun Liang, Wenna Lin, Quinn Wright, Hua Shen, Colin Mar, Janette Sam, Rasika Rajapakshe","doi":"10.1148/ryai.240287","DOIUrl":null,"url":null,"abstract":"<p><p>Purpose To test a commercial artificial intelligence (AI) system for breast cancer detection at the BC Cancer Breast Screening Program. Materials and Methods In this retrospective study of 136 700 female individuals (mean age, 58.8 years ± 9.4 [SD]; median, 59.0 years; IQR = 14.0) who underwent digital mammography screening in British Columbia, Canada, between February 2019 and January 2020, breast cancer detection performance of a commercial AI algorithm was stratified by demographic, clinical, and imaging features and evaluated using the area under the receiver operating characteristic curve (AUC), and AI performance was compared with radiologists, using sensitivity and specificity. Results At 1-year follow-up, the AUC of the AI algorithm was 0.93 (95% CI: 0.92, 0.94) for breast cancer detection. Statistically significant differences were found for mammograms across radiologist-assigned Breast Imaging Reporting and Data System breast densities: category A, AUC of 0.96 (95% CI: 0.94, 0.99); category B, AUC of 0.94 (95% CI: 0.92, 0.95); category C, AUC of 0.93 (95% CI: 0.91, 0.95), and category D, AUC of 0.84 (95% CI: 0.76, 0.91) (A<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .002; B<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .009; C<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .02). The AI showed higher performance for mammograms with architectural distortion (0.96 [95% CI: 0.94, 0.98]) versus without (0.92 [95% CI: 0.90, 0.93], <i>P</i> = .003) and lower performance for mammograms with calcification (0.87 [95% CI: 0.85, 0.90]) versus without (0.92 [95% CI: 0.91, 0.94], <i>P</i> < .001). Sensitivity of radiologists (92.6% ± 1.0) exceeded the AI algorithm (89.4% ± 1.1, <i>P</i> = .01), but there was no evidence of difference at 2-year follow-up (83.5% ± 1.2 vs 84.3% ± 1.2, <i>P</i> = .69). Conclusion The tested commercial AI algorithm is generalizable for a large external breast cancer screening cohort from Canada but showed different performance for some subgroups, including those with architectural distortion or calcification in the image. <b>Keywords:</b> Mammography, QA/QC, Screening, Technology Assessment, Screening Mammography, Artificial Intelligence, Breast Cancer, Model Testing, Bias and Fairness <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Milch and Lee in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240287"},"PeriodicalIF":8.1000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.240287","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Purpose To test a commercial artificial intelligence (AI) system for breast cancer detection at the BC Cancer Breast Screening Program. Materials and Methods In this retrospective study of 136 700 female individuals (mean age, 58.8 years ± 9.4 [SD]; median, 59.0 years; IQR = 14.0) who underwent digital mammography screening in British Columbia, Canada, between February 2019 and January 2020, the breast cancer detection performance of a commercial AI algorithm was stratified by demographic, clinical, and imaging features and evaluated using the area under the receiver operating characteristic curve (AUC); AI performance was compared with that of radiologists using sensitivity and specificity. Results At 1-year follow-up, the AUC of the AI algorithm for breast cancer detection was 0.93 (95% CI: 0.92, 0.94). Statistically significant differences were found across radiologist-assigned Breast Imaging Reporting and Data System (BI-RADS) breast density categories: category A, AUC of 0.96 (95% CI: 0.94, 0.99); category B, AUC of 0.94 (95% CI: 0.92, 0.95); category C, AUC of 0.93 (95% CI: 0.91, 0.95); and category D, AUC of 0.84 (95% CI: 0.76, 0.91) (A_AUC > D_AUC, P = .002; B_AUC > D_AUC, P = .009; C_AUC > D_AUC, P = .02). The AI algorithm showed a higher AUC for mammograms with architectural distortion (0.96 [95% CI: 0.94, 0.98]) than without (0.92 [95% CI: 0.90, 0.93], P = .003) and a lower AUC for mammograms with calcification (0.87 [95% CI: 0.85, 0.90]) than without (0.92 [95% CI: 0.91, 0.94], P < .001). Sensitivity of radiologists (92.6% ± 1.0) exceeded that of the AI algorithm (89.4% ± 1.1, P = .01), but there was no evidence of a difference at 2-year follow-up (83.5% ± 1.2 vs 84.3% ± 1.2, P = .69). Conclusion The tested commercial AI algorithm generalized to a large external breast cancer screening cohort from Canada but showed different performance for some subgroups, including mammograms with architectural distortion or calcification. Keywords: Mammography, QA/QC, Screening, Technology Assessment, Screening Mammography, Artificial Intelligence, Breast Cancer, Model Testing, Bias and Fairness. Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Milch and Lee in this issue.
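The subgroup AUCs with 95% CIs reported above follow a standard evaluation pattern. The sketch below is not the study's code; it only illustrates, under assumed per-exam column names (cancer, ai_score, density) and a percentile bootstrap rather than the authors' exact statistical methods, how overall and per-density-category AUCs with 95% CIs could be computed.

```python
# Minimal sketch (not the study's actual analysis): overall and per-BI-RADS-density
# AUCs with percentile-bootstrap 95% CIs. Column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, y_score, n_boot=2000, seed=0):
    """Return the point-estimate AUC and a percentile bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    aucs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample exams with replacement
        if len(np.unique(y_true[idx])) < 2:    # AUC needs both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return point, lo, hi

# Assumed layout: one row per screening exam with
#   'cancer'   - 1 if cancer was diagnosed within the follow-up window, else 0
#   'ai_score' - continuous malignancy score from the commercial AI algorithm
#   'density'  - radiologist-assigned BI-RADS density category (A-D)
df = pd.read_csv("screening_cohort.csv")

auc, lo, hi = bootstrap_auc(df["cancer"].to_numpy(), df["ai_score"].to_numpy())
print(f"Overall AUC {auc:.2f} (95% CI: {lo:.2f}, {hi:.2f})")

for cat, grp in df.groupby("density"):
    auc, lo, hi = bootstrap_auc(grp["cancer"].to_numpy(), grp["ai_score"].to_numpy())
    print(f"Density {cat}: AUC {auc:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```

Comparing subgroup AUCs (for example, category A versus D) would additionally require an AUC comparison test such as DeLong's method; the full article describes the authors' actual statistical approach.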

Source journal
CiteScore: 16.20
Self-citation rate: 1.00%
Journal description: Radiology: Artificial Intelligence is a bimonthly publication focusing on emerging applications of machine learning and artificial intelligence in imaging across disciplines. The journal is available online and accepts multiple manuscript types, including Original Research, Technical Developments, Data Resources, Review articles, Editorials, Letters to the Editor and Replies, Special Reports, and AI in Brief.