Human-AI Interaction in the ScreenTrustCAD Trial: Recall Proportion and Positive Predictive Value Related to Screening Mammograms Flagged by AI CAD versus a Human Reader

Karin E Dembrower, Alessio Crippa, Martin Eklund, Fredrik Strand

Radiology, 314(3):e242566, March 2025. DOI: 10.1148/radiol.242566
Abstract
Background The ScreenTrustCAD trial was a prospective study that evaluated cancer detection rates for combinations of artificial intelligence (AI) computer-aided detection (CAD) and two radiologists. The results raised concerns about the tendency of radiologists to agree with AI CAD too much (when AI CAD made an erroneous flagging) or too little (when AI CAD made a correct flagging).

Purpose To evaluate differences in recall proportion and positive predictive value (PPV) related to which reader flagged the mammogram for consensus discussion: AI CAD and/or radiologists.

Materials and Methods Participants were enrolled from April 2021 to June 2022, and each examination was interpreted by three independent readers: two radiologists and AI CAD, after which positive findings were forwarded to the consensus discussion. For each combination of readers flagging an examination, the proportion recalled was calculated, and the PPV was calculated by dividing the number of pathologic evaluation-verified cancers by the number of positive examinations.

Results The study included 54 991 women (median age, 55 years [IQR, 46-65 years]), among whom 5489 were flagged for consensus discussion and 1348 were recalled. For examinations flagged by one reader, the proportion recalled after flagging by one radiologist was larger (14.2% [263 of 1858]) compared with flagging by AI CAD (4.6% [86 of 1886]) (P < .001), whereas the PPV for breast cancer was lower (3.4% [nine of 263] vs 22% [19 of 86]) (P < .001). For examinations flagged by two readers, the proportion recalled after flagging by two radiologists was larger (57.2% [360 of 629]) compared with flagging by AI CAD and one radiologist (38.6% [244 of 632]) (P < .001), whereas the PPV was lower (2.5% [nine of 360] vs 25.0% [61 of 244]) (P < .001). For examinations flagged by all three readers, the proportion recalled was 82.6% (400 of 484) and the PPV was 34.2% (137 of 400).
Conclusion A larger proportion of participants were recalled after initial flagging by radiologists than after flagging by AI CAD, but with a lower proportion of cancers among those recalled. ClinicalTrials.gov Identifier: NCT04778670. © RSNA, 2025. See also the editorial by Grimm in this issue.
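The two metrics defined in Materials and Methods can be reproduced directly from the counts reported in Results. A minimal sketch (function names are illustrative, not from the trial; the counts are those given in the abstract for single-reader flags):

```python
# Minimal sketch of the two abstract metrics, assuming the definitions in
# Materials and Methods: recall proportion = recalled / flagged, and
# PPV = pathology-verified cancers / positive (recalled) examinations.

def recall_proportion(recalled: int, flagged: int) -> float:
    """Percentage of flagged examinations that were recalled."""
    return 100 * recalled / flagged

def ppv(cancers: int, recalled: int) -> float:
    """Positive predictive value: verified cancers per recalled examination, in %."""
    return 100 * cancers / recalled

# Counts for examinations flagged by a single reader (from Results):
print(round(recall_proportion(263, 1858), 1))  # one radiologist -> 14.2
print(round(recall_proportion(86, 1886), 1))   # AI CAD only     -> 4.6
print(round(ppv(9, 263), 1))                   # one radiologist -> 3.4
print(round(ppv(19, 86), 1))                   # AI CAD only     -> 22.1
```

Note the asymmetry the abstract highlights: radiologist-only flags led to roughly three times more recalls than AI-CAD-only flags, but each AI-CAD-only recall was several times more likely to be a verified cancer.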