{"title":"2024 Manuscript Reviewers: A Note of Thanks.","authors":"Umar Mahmood, Charles E Kahn","doi":"10.1148/ryai.250163","DOIUrl":"https://doi.org/10.1148/ryai.250163","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 2","pages":"e250163"},"PeriodicalIF":8.1,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143711428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw
{"title":"Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals.","authors":"Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen","doi":"10.1148/ryai.230620","DOIUrl":"https://doi.org/10.1148/ryai.230620","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate machine learning and deep learning-based models for automated protocoling of emergency brain MRI scans based on clinical referral text. Materials and Methods In this single-institution, retrospective study of 1953 emergency brain MRI referrals from January 2016 to January 2019, two neuroradiologists labeled the imaging protocol and use of contrast agent as the reference standard. Three machine learning algorithms (Naïve Bayes, support vector machine, and XGBoost) and two pretrained deep learning models (Finnish BERT and GPT-3.5) were developed to predict the MRI protocol and need for contrast agent. Each model was trained with three datasets (100% of training data, 50% of training data, and 50% + augmented training data). Prediction accuracy was assessed with test set. Results The GPT-3.5 models trained with 100% of the training data performed best in both tasks, achieving accuracy of 84% (95% CI: 80%-88%) for the correct protocol and 91% (95% CI: 88%-94%) for contrast. BERT had an accuracy of 78% (95% CI: 74%-82%) for the protocol and 89% (95% CI: 86%-92%) for contrast. The best machine learning model in the protocol task was XGBoost (accuracy 78% [95% CI: 73%-82%]), and in the contrast agent task support vector machine and XGBoost (accuracy 88% [95% CI: 84%-91%] for both). The accuracies of two nonneuroradiologists were 80%-83% in the protocol task and 89%-91% in the contrast medium task. Conclusion Machine learning and deep learning models demonstrated high performance in automatic protocoling emergency brain MRI scans based on text from clinical referrals. Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230620"},"PeriodicalIF":8.1,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind
{"title":"Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.","authors":"José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou","doi":"10.1148/ryai.230555","DOIUrl":"https://doi.org/10.1148/ryai.230555","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To assess the impact of scanner manufacturer and scan protocol on the performance of deep learning models to classify prostate cancer (PCa) aggressiveness on biparametric MRI (bpMRI). Materials and Methods In this retrospective study, 5,478 cases from ProstateNet, a PCa bpMRI dataset with examinations from 13 centers, were used to develop five deep learning (DL) models to predict PCa aggressiveness with minimal lesion information and test how using data from different subgroups-scanner manufacturers and endorectal coil (ERC) use (Siemens, Philips, GE with and without ERC and the full dataset)-impacts model performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC). The impact of clinical features (age, prostate-specific antigen level, Prostate Imaging Reporting and Data System [PI-RADS] score) on model performance was also evaluated. Results DL models were trained on 4,328 bpMRI cases, and the best model achieved AUC = 0.73 when trained and tested using data from all manufacturers. Hold-out test set performance was higher when models trained with data from a manufacturer were tested on the same manufacturer (within-and between-manufacturer AUC differences of 0.05 on average, <i>P</i> < .001). The addition of clinical features did not improve performance (<i>P</i> = .24). Learning curve analyses showed that performance remained stable as training data increased. Analysis of DL features showed that scanner manufacturer and scan protocol heavily influenced feature distributions. Conclusion In automated classification of PCa aggressiveness using bpMRI data, scanner manufacturer and endorectal coil use had a major impact on DL model performance and features. Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230555"},"PeriodicalIF":8.1,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak