{"title":"Establishing a Chain of Evidence for AI in Radiology: Sham AI and Randomized Controlled Trials.","authors":"John D Mayfield, Javier Romero","doi":"10.1148/ryai.250334","DOIUrl":"10.1148/ryai.250334","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250334"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144162292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals.","authors":"Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen","doi":"10.1148/ryai.230620","DOIUrl":"10.1148/ryai.230620","url":null,"abstract":"<p><p>Purpose To develop and evaluate machine learning- and deep learning-based models for automated protocoling of emergency brain MRI scans based on clinical referral text. Materials and Methods In this single-institution, retrospective study of 1953 emergency brain MRI referrals from January 2016 to January 2019, two neuroradiologists labeled the imaging protocol and use of contrast agent as the reference standard. Three machine learning algorithms (naive Bayes, support vector machine, and XGBoost) and two pretrained deep learning models (Finnish bidirectional encoder representations from transformers [BERT] and generative pretrained transformer [GPT]-3.5 [GPT-3.5 Turbo; OpenAI]) were developed to predict the MRI protocol and the need for a contrast agent. Each model was trained with three datasets (100% of training data, 50% of training data, and 50% plus augmented training data). Prediction accuracy was assessed with a test set. Results The GPT-3.5 models trained with 100% of the training data performed best in both tasks, achieving an accuracy of 84% (95% CI: 80, 88) for the correct protocol and 91% (95% CI: 88, 94) for the contrast agent. BERT had an accuracy of 78% (95% CI: 74, 82) for the protocol and 89% (95% CI: 86, 92) for the contrast agent. The best machine learning model in the protocol task was XGBoost (accuracy, 78%; 95% CI: 73, 82), and the best machine learning models in the contrast agent task were support vector machine and XGBoost (accuracy, 88%; 95% CI: 84, 91 for both). The accuracies of two nonneuroradiologists were 80%-83% in the protocol task and 89%-91% in the contrast agent task. 
Conclusion Machine learning and deep learning models demonstrated high performance in automated protocoling of emergency brain MRI scans based on text from clinical referrals. <b>Keywords:</b> Natural Language Processing, Automatic Protocoling, Deep Learning, Machine Learning, Emergency Brain MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Strotzer in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230620"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"External Testing of a Commercial AI Algorithm for Breast Cancer Detection at Screening Mammography.","authors":"John Brandon Graham-Knight, Pengkun Liang, Wenna Lin, Quinn Wright, Hua Shen, Colin Mar, Janette Sam, Rasika Rajapakshe","doi":"10.1148/ryai.240287","DOIUrl":"10.1148/ryai.240287","url":null,"abstract":"<p><p>Purpose To test a commercial artificial intelligence (AI) system for breast cancer detection at the BC Cancer Breast Screening Program. Materials and Methods In this retrospective study of 136 700 female individuals (mean age, 58.8 years ± 9.4 [SD]; median, 59.0 years; IQR, 14.0) who underwent digital mammography screening in British Columbia, Canada, between February 2019 and January 2020, the breast cancer detection performance of a commercial AI algorithm was stratified by demographic, clinical, and imaging features and evaluated using the area under the receiver operating characteristic curve (AUC), and AI performance was compared with that of radiologists using sensitivity and specificity. Results At 1-year follow-up, the AUC of the AI algorithm was 0.93 (95% CI: 0.92, 0.94) for breast cancer detection. Statistically significant differences were found for mammograms across radiologist-assigned Breast Imaging Reporting and Data System breast densities: category A, AUC of 0.96 (95% CI: 0.94, 0.99); category B, AUC of 0.94 (95% CI: 0.92, 0.95); category C, AUC of 0.93 (95% CI: 0.91, 0.95); and category D, AUC of 0.84 (95% CI: 0.76, 0.91) (A<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .002; B<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .009; C<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .02). The AI algorithm showed higher performance for mammograms with architectural distortion (0.96 [95% CI: 0.94, 0.98]) versus without (0.92 [95% CI: 0.90, 0.93], <i>P</i> = .003) and lower performance for mammograms with calcification (0.87 [95% CI: 0.85, 0.90]) versus without (0.92 [95% CI: 0.91, 0.94], <i>P</i> < .001). 
Sensitivity of radiologists (92.6% ± 1.0) exceeded that of the AI algorithm (89.4% ± 1.1, <i>P</i> = .01), but there was no evidence of a difference at 2-year follow-up (83.5% ± 1.2 vs 84.3% ± 1.2, <i>P</i> = .69). Conclusion The tested commercial AI algorithm generalized to a large external breast cancer screening cohort from Canada but showed differing performance for some subgroups, including mammograms with architectural distortion or calcification. <b>Keywords:</b> Mammography, QA/QC, Screening, Technology Assessment, Screening Mammography, Artificial Intelligence, Breast Cancer, Model Testing, Bias and Fairness <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Milch and Lee in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240287"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143606545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Establishing the Evidence Needed for AI-driven Mammography Screening.","authors":"Hannah S Milch, Christoph I Lee","doi":"10.1148/ryai.250152","DOIUrl":"10.1148/ryai.250152","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250152"},"PeriodicalIF":13.2,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127946/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143764707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Mortality with Deep Learning: Are Metrics Alone Enough?","authors":"Eduardo Moreno Júdice de Mattos Farina, Paulo Eduardo de Aguiar Kuriki","doi":"10.1148/ryai.250224","DOIUrl":"10.1148/ryai.250224","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250224"},"PeriodicalIF":13.2,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144062366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}