External Testing of a Commercial AI Algorithm for Breast Cancer Detection at Screening Mammography
John Brandon Graham-Knight, Pengkun Liang, Wenna Lin, Quinn Wright, Hua Shen, Colin Mar, Janette Sam, Rasika Rajapakshe
Radiology: Artificial Intelligence, e240287 (published 2025-05-01). DOI: 10.1148/ryai.240287
Abstract
Purpose: To test a commercial artificial intelligence (AI) system for breast cancer detection at the BC Cancer Breast Screening Program.

Materials and Methods: In this retrospective study of 136 700 female individuals (mean age, 58.8 years ± 9.4 [SD]; median, 59.0 years; IQR = 14.0) who underwent digital mammography screening in British Columbia, Canada, between February 2019 and January 2020, breast cancer detection performance of a commercial AI algorithm was stratified by demographic, clinical, and imaging features and evaluated using the area under the receiver operating characteristic curve (AUC). AI performance was also compared with that of radiologists using sensitivity and specificity.

Results: At 1-year follow-up, the AUC of the AI algorithm for breast cancer detection was 0.93 (95% CI: 0.92, 0.94). Statistically significant differences were found across radiologist-assigned Breast Imaging Reporting and Data System breast density categories: category A, AUC of 0.96 (95% CI: 0.94, 0.99); category B, AUC of 0.94 (95% CI: 0.92, 0.95); category C, AUC of 0.93 (95% CI: 0.91, 0.95); and category D, AUC of 0.84 (95% CI: 0.76, 0.91) (category A AUC > category D AUC, P = .002; category B AUC > category D AUC, P = .009; category C AUC > category D AUC, P = .02). The AI algorithm showed higher performance for mammograms with architectural distortion (AUC, 0.96 [95% CI: 0.94, 0.98]) than for those without (0.92 [95% CI: 0.90, 0.93]; P = .003) and lower performance for mammograms with calcification (0.87 [95% CI: 0.85, 0.90]) than for those without (0.92 [95% CI: 0.91, 0.94]; P < .001). Sensitivity of radiologists (92.6% ± 1.0) exceeded that of the AI algorithm (89.4% ± 1.1; P = .01) at 1-year follow-up, but there was no evidence of a difference at 2-year follow-up (83.5% ± 1.2 vs 84.3% ± 1.2; P = .69).

Conclusion: The tested commercial AI algorithm is generalizable to a large external breast cancer screening cohort from Canada but showed different performance for some subgroups, including mammograms with architectural distortion or calcification.

Keywords: Mammography, QA/QC, Screening, Technology Assessment, Screening Mammography, Artificial Intelligence, Breast Cancer, Model Testing, Bias and Fairness

Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Milch and Lee in this issue.
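The abstract reports AUCs with 95% CIs and compares the sensitivity and specificity of the AI algorithm with those of radiologists. As a rough illustration of how such metrics are commonly computed, the sketch below evaluates AUC with a percentile-bootstrap 95% CI and sensitivity/specificity at a fixed recall threshold on synthetic data. The variable names, the 0.7 operating threshold, the bootstrap settings, and the synthetic labels and scores are illustrative assumptions, not the authors' actual analysis pipeline or statistical tests.

```python
# Minimal sketch (assumptions noted above): AUC with a bootstrapped 95% CI,
# plus sensitivity/specificity from binary recall decisions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Point-estimate AUC and a percentile bootstrap CI."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes present
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary recall decisions."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    sens = (y_pred & y_true).sum() / y_true.sum()
    spec = (~y_pred & ~y_true).sum() / (~y_true).sum()
    return sens, spec

# Toy data standing in for follow-up cancer labels and AI suspicion scores.
y = rng.integers(0, 2, 1000)
scores = np.clip(y * 0.4 + rng.normal(0.5, 0.25, 1000), 0, 1)

auc, (ci_lo, ci_hi) = auc_with_ci(y, scores)
sens, spec = sensitivity_specificity(y, scores >= 0.7)  # assumed threshold
print(f"AUC {auc:.2f} (95% CI: {ci_lo:.2f}, {ci_hi:.2f}); "
      f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

Subgroup results such as the density-category AUCs would follow the same pattern, computed on the subset of cases in each stratum; the specific test behind the reported P values is not stated in the abstract and is not reproduced here.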
Journal Introduction:
Radiology: Artificial Intelligence is a bi-monthly publication that focuses on the emerging applications of machine learning and artificial intelligence in the field of imaging across various disciplines. This journal is available online and accepts multiple manuscript types, including Original Research, Technical Developments, Data Resources, Review articles, Editorials, Letters to the Editor and Replies, Special Reports, and AI in Brief.