Evaluating Skellytour for Automated Skeleton Segmentation from Whole-Body CT Images
Daniel C Mann, Michael W Rutherford, Phillip Farmer, Joshua M Eichhorn, Fathima Fijula Palot Manzil, Christopher P Wardell
Radiology: Artificial Intelligence, e240050 (published 2025-02-19). DOI: 10.1148/ryai.240050

Purpose: To construct and evaluate the performance of a machine learning model for bone segmentation using whole-body CT images.
Materials and Methods: In this retrospective study, whole-body CT scans (June 2010 to January 2018) from 90 patients (mean age, 61 years ± 9 [SD]; 45 male, 45 female) with multiple myeloma were manually segmented using 60 labels and subsegmented into cortical and trabecular bone. Segmentations were verified by board-certified radiology and nuclear medicine physicians. The impacts of isotropy, resolution, multiple labeling schemes, and postprocessing were assessed. Model performance was assessed on internal and external test datasets (n = 362 scans) and benchmarked against the TotalSegmentator segmentation model. Performance was assessed using the Dice similarity coefficient (DSC), normalized surface distance (NSD), and manual inspection.
Results: Skellytour achieved consistently high segmentation performance on the internal dataset (DSC: 0.94, NSD: 0.99) and two external datasets (DSC: 0.94 and 0.96; NSD: 0.999 and 1.0), outperforming TotalSegmentator on the first two datasets. Subsegmentation performance was also high (DSC: 0.95, NSD: 0.995). Skellytour produced finely detailed segmentations, even in low-density bones.
Conclusion: The study demonstrates that Skellytour is an accurate and generalizable bone segmentation and subsegmentation model for CT data; it is available as a Python package via GitHub (https://github.com/cpwardell/Skellytour).
Published under a CC BY 4.0 license.
Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw
Hyuk Jin Yun, Han-Jui Lee, Sungmin You, Joo Young Lee, Jerjes Aguirre-Chavez, Lana Vasung, Hyun Ju Lee, Tomo Tarui, Henry A Feldman, P Ellen Grant, Kiho Im
Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals
Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen
Radiology: Artificial Intelligence, e230620 (published 2025-02-19). DOI: 10.1148/ryai.230620

Purpose: To develop and evaluate machine learning and deep learning-based models for automated protocoling of emergency brain MRI scans based on clinical referral text.
Materials and Methods: In this single-institution, retrospective study of 1953 emergency brain MRI referrals from January 2016 to January 2019, two neuroradiologists labeled the imaging protocol and use of contrast agent as the reference standard. Three machine learning algorithms (naïve Bayes, support vector machine, and XGBoost) and two pretrained deep learning models (Finnish BERT and GPT-3.5) were developed to predict the MRI protocol and the need for a contrast agent. Each model was trained with three datasets (100% of the training data, 50% of the training data, and 50% plus augmented training data). Prediction accuracy was assessed on a held-out test set.
Results: The GPT-3.5 models trained with 100% of the training data performed best in both tasks, achieving an accuracy of 84% (95% CI: 80%-88%) for the correct protocol and 91% (95% CI: 88%-94%) for contrast. BERT had an accuracy of 78% (95% CI: 74%-82%) for the protocol and 89% (95% CI: 86%-92%) for contrast. The best machine learning model in the protocol task was XGBoost (accuracy 78% [95% CI: 73%-82%]); in the contrast agent task, support vector machine and XGBoost performed best (accuracy 88% [95% CI: 84%-91%] for both). The accuracies of two nonneuroradiologists were 80%-83% in the protocol task and 89%-91% in the contrast agent task.
Conclusion: Machine learning and deep learning models demonstrated high performance in automatically protocoling emergency brain MRI scans based on text from clinical referrals.
Published under a CC BY 4.0 license.
Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind
Thomas Z Li, Kaiwen Xu, Aravind Krishnan, Riqiang Gao, Michael N Kammer, Sanja Antic, David Xiao, Michael Knight, Yency Martinez, Rafael Paez, Robert J Lentz, Stephen Deppen, Eric L Grogan, Thomas A Lasko, Kim L Sandler, Fabien Maldonado, Bennett A Landman
Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI
José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou
Radiology: Artificial Intelligence, e230555 (published 2025-01-22). DOI: 10.1148/ryai.230555

Purpose: To assess the impact of scanner manufacturer and scan protocol on the performance of deep learning models for classifying prostate cancer (PCa) aggressiveness on biparametric MRI (bpMRI).
Materials and Methods: In this retrospective study, 5,478 cases from ProstateNet, a PCa bpMRI dataset with examinations from 13 centers, were used to develop five deep learning (DL) models to predict PCa aggressiveness with minimal lesion information and to test how training on data from different subgroups (scanner manufacturer and endorectal coil [ERC] use: Siemens, Philips, and GE with and without ERC, as well as the full dataset) impacts model performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC). The impact of clinical features (age, prostate-specific antigen level, Prostate Imaging Reporting and Data System [PI-RADS] score) on model performance was also evaluated.
Results: DL models were trained on 4,328 bpMRI cases, and the best model achieved an AUC of 0.73 when trained and tested using data from all manufacturers. Hold-out test set performance was higher when models trained on data from one manufacturer were tested on that same manufacturer (within- versus between-manufacturer AUC differences of 0.05 on average, P < .001). The addition of clinical features did not improve performance (P = .24). Learning curve analyses showed that performance remained stable as training data increased. Analysis of DL features showed that scanner manufacturer and scan protocol heavily influenced feature distributions.
Conclusion: In automated classification of PCa aggressiveness using bpMRI data, scanner manufacturer and endorectal coil use had a major impact on DL model performance and features.
Published under a CC BY 4.0 license.