Radiology-Artificial Intelligence: Latest Articles

2024 Manuscript Reviewers: A Note of Thanks.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.250163
Umar Mahmood, Charles E Kahn
Radiology: Artificial Intelligence 2025;7(2):e250163. No abstract available.
Citations: 0
Deep Learning-based Aligned Strain from Cine Cardiac MRI for Detection of Fibrotic Myocardial Tissue in Patients with Duchenne Muscular Dystrophy.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-26 DOI: 10.1148/ryai.240303
Sven Koehler, Julian Kuhm, Tyler Huffaker, Daniel Young, Animesh Tandon, Florian André, Norbert Frey, Gerald Greil, Tarique Hussain, Sandy Engelhardt
"Just Accepted" manuscript: peer reviewed and accepted for publication, pending copyediting, layout, and final proof review.
Purpose: To develop a deep learning (DL) model that derives aligned strain values from cine (noncontrast) cardiac MRI and to evaluate how well these values predict myocardial fibrosis in patients with Duchenne muscular dystrophy (DMD).
Materials and Methods: This retrospective study included 139 male patients with DMD who underwent cardiac MRI at a single center between February 2018 and April 2023. A DL pipeline was developed to detect five key frames throughout the cardiac cycle, along with the respective dense deformation fields, allowing phase-specific strain analysis across patients and from one key frame to the next. The effectiveness of these strain values in identifying abnormal deformations associated with fibrotic segments was evaluated in 57 patients (mean age, 15.2 ± 3.1 years), and reproducibility was assessed in 82 patients (mean age, 12.8 ± 2.7 years), comparing the method with existing feature-tracking and DL-based methods. Statistical analysis compared strain values using t tests, mixed models, and more than 2000 machine learning models, reporting accuracy, F1 score, sensitivity, and specificity.
Results: DL-based aligned strain identified five times more differences between fibrotic and nonfibrotic segments than traditional strain values (29 vs 5; P < .01) and revealed abnormal diastolic deformation patterns often missed by traditional methods. Aligned strain values also improved predictive models for myocardial fibrosis detection, raising specificity by 40%, overall accuracy by 17%, and accuracy in patients with preserved ejection fraction by 61%.
Conclusion: The proposed aligned strain technique enables motion-based detection of myocardial dysfunction on contrast-free cardiac MRI, facilitating detailed interpatient strain analysis and allowing precise tracking of disease progression in DMD. © RSNA, 2025.
Citations: 0
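The pipeline above derives strain from dense deformation fields between key frames. As a point of reference, the sketch below shows one standard way to compute a Green-Lagrange strain tensor field from a 2D displacement field with NumPy; the array layout and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def green_lagrange_strain(disp):
    """Compute the 2D Green-Lagrange strain tensor field from a dense
    displacement field of shape (2, H, W): disp[0] = u_x, disp[1] = u_y.
    Returns an array of shape (2, 2, H, W)."""
    # Spatial gradients of each displacement component (finite differences).
    du_dy, du_dx = np.gradient(disp[0])
    dv_dy, dv_dx = np.gradient(disp[1])
    # Deformation gradient F = I + grad(u), arranged per pixel.
    F = np.array([[1.0 + du_dx, du_dy],
                  [dv_dx, 1.0 + dv_dy]])
    # E = 0.5 * (F^T F - I); einsum contracts the tensor axes per pixel.
    C = np.einsum('ki...,kj...->ij...', F, F)   # right Cauchy-Green tensor
    E = 0.5 * (C - np.eye(2)[:, :, None, None])
    return E

# Example: a random small displacement field on a 64 x 64 grid.
rng = np.random.default_rng(0)
disp = rng.normal(scale=0.01, size=(2, 64, 64))
print(green_lagrange_strain(disp).shape)  # (2, 2, 64, 64)
```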
Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Using a Longitudinally Aware Segmentation Network.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-19 DOI: 10.1148/ryai.240229
Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw
"Just Accepted" manuscript: peer reviewed and accepted for publication, pending copyediting, layout, and final proof review.
Purpose: To develop a longitudinally aware segmentation network (LAS-Net) that can quantify serial PET/CT images of pediatric patients with Hodgkin lymphoma.
Materials and Methods: This retrospective study included baseline (PET1) and interim (PET2) PET/CT images from 297 pediatric patients enrolled in two Children's Oncology Group clinical trials (AHOD1331 and AHOD0831). The internal dataset included 200 patients (enrolled March 2015 to August 2019; median age, 15.4 years [IQR: 5.6, 22.0]; 107 male), and the external testing dataset included 97 patients (enrolled December 2009 to January 2012; median age, 15.8 years [IQR: 5.2, 21.4]; 59 male). LAS-Net incorporates longitudinal cross-attention, allowing relevant features from PET1 to inform the analysis of PET2. Lesion segmentation performance on PET1 images was evaluated using Dice coefficients, and lesion detection performance on PET2 images was evaluated using F1 scores. Additionally, quantitative PET metrics, including metabolic tumor volume (MTV) and total lesion glycolysis (TLG) in PET1, as well as qPET and ΔSUVmax in PET2, were extracted and compared against physician-derived measurements. Agreement between model and physician measurements was quantified using Spearman correlation, and bootstrap resampling was used for statistical analysis.
Results: LAS-Net detected residual lymphoma on PET2 scans with an F1 score of 0.61 (precision/recall: 0.62/0.60), outperforming all comparator methods (P < .01). For baseline segmentation, LAS-Net achieved a mean Dice score of 0.77. In PET quantification, LAS-Net's measurements of qPET, ΔSUVmax, MTV, and TLG correlated strongly with physician measurements, with Spearman ρ values of 0.78, 0.80, 0.93, and 0.96, respectively. Quantification performance remained high, with a slight decrease, in the external testing cohort.
Conclusion: LAS-Net demonstrated significant improvements in quantifying PET metrics across serial scans in pediatric patients with Hodgkin lymphoma, highlighting the value of longitudinal awareness in evaluating multi-time-point imaging datasets. © RSNA, 2025.
Citations: 0
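LAS-Net's longitudinal cross-attention lets baseline (PET1) features inform the analysis of the interim scan (PET2). The toy PyTorch block below illustrates the general pattern of such a module; the token layout, dimensions, and class name are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class LongitudinalCrossAttention(nn.Module):
    """Toy cross-attention block: features from the interim scan (PET2)
    attend to features from the baseline scan (PET1)."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_pet2, feat_pet1):
        # Query: PET2 tokens; key/value: PET1 tokens, so baseline
        # context flows into the interim-scan representation.
        attended, _ = self.attn(feat_pet2, feat_pet1, feat_pet1)
        return self.norm(feat_pet2 + attended)  # residual + norm

# Example with flattened patch tokens: batch 2, 196 tokens, dim 128.
block = LongitudinalCrossAttention()
pet1 = torch.randn(2, 196, 128)
pet2 = torch.randn(2, 196, 128)
print(block(pet2, pet1).shape)  # torch.Size([2, 196, 128])
```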
Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-19 DOI: 10.1148/ryai.230620
Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen
"Just Accepted" manuscript: peer reviewed and accepted for publication, pending copyediting, layout, and final proof review.
Purpose: To develop and evaluate machine learning and deep learning models for automated protocoling of emergency brain MRI scans based on clinical referral text.
Materials and Methods: In this single-institution retrospective study of 1953 emergency brain MRI referrals from January 2016 to January 2019, two neuroradiologists labeled the imaging protocol and use of contrast agent as the reference standard. Three machine learning algorithms (naïve Bayes, support vector machine, and XGBoost) and two pretrained deep learning models (Finnish BERT and GPT-3.5) were developed to predict the MRI protocol and the need for contrast agent. Each model was trained with three datasets (100% of training data, 50% of training data, and 50% plus augmented training data). Prediction accuracy was assessed on the test set.
Results: The GPT-3.5 models trained with 100% of the training data performed best in both tasks, achieving accuracy of 84% (95% CI: 80%-88%) for the correct protocol and 91% (95% CI: 88%-94%) for contrast. BERT had an accuracy of 78% (95% CI: 74%-82%) for the protocol and 89% (95% CI: 86%-92%) for contrast. The best machine learning model in the protocol task was XGBoost (accuracy, 78% [95% CI: 73%-82%]); in the contrast agent task, support vector machine and XGBoost performed best (accuracy, 88% [95% CI: 84%-91%] for both). The accuracies of two nonneuroradiologists were 80%-83% in the protocol task and 89%-91% in the contrast agent task.
Conclusion: Machine learning and deep learning models demonstrated high performance in automated protocoling of emergency brain MRI scans based on text from clinical referrals. Published under a CC BY 4.0 license.
Citations: 0
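The study compared classical text classifiers (naïve Bayes, support vector machine, XGBoost) with pretrained language models on referral text. The sketch below shows a minimal TF-IDF plus linear SVM baseline of the kind evaluated; the referral snippets and protocol labels are invented placeholders, not study data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy referrals and protocol labels; in the study, the reference
# standard came from two neuroradiologists.
referrals = [
    "sudden severe headache, rule out hemorrhage",
    "known glioma, follow-up with contrast",
    "first seizure, epilepsy protocol requested",
    "suspected stroke, acute onset hemiparesis",
]
protocols = ["hemorrhage", "tumor", "epilepsy", "stroke"]

# TF-IDF over unigrams and bigrams feeding a linear SVM.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LinearSVC(),
)
model.fit(referrals, protocols)
print(model.predict(["new onset seizures in young adult"]))
```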
Performance of Two Deep Learning-based AI Models for Breast Cancer Detection and Localization on Screening Mammograms from BreastScreen Norway.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-05 DOI: 10.1148/ryai.240039
Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind
"Just Accepted" manuscript: peer reviewed and accepted for publication, pending copyediting, layout, and final proof review.
Purpose: To evaluate the cancer detection and marker placement accuracy of two artificial intelligence (AI) models developed for interpretation of screening mammograms.
Materials and Methods: This retrospective study included data from 129 434 screening examinations (all female; mean age, 59.2 years; SD, 5.8) performed between January 2008 and December 2018 in BreastScreen Norway. Model A was commercially available; model B was developed in-house. Area under the receiver operating characteristic curve (AUC) with 95% confidence intervals (CIs) was calculated. The 3.2% and 11.1% of examinations with the highest AI scores were defined as positive, at thresholds 1 and 2, respectively. A radiologic review assessed the location of AI markings and classified interval cancers as true or false negative.
Results: The AUC was 0.93 (95% CI: 0.92-0.94) for both models A and B when including screen-detected and interval cancers. Model A identified 82.5% (611/741) of the screen-detected cancers at threshold 1 and 92.4% (685/741) at threshold 2; for model B, the respective proportions were 81.8% (606/741) and 93.7% (694/741). The AI markings were correctly localized for all screen-detected cancers identified by both models and for 82% (56/68) of the interval cancers for model A and 79% (54/68) for model B. At review, 21.6% (45/208) of the interval cancers were identified at the preceding screening by either or both models, correctly localized, and classified as false negative (n = 17) or as showing minimal signs of malignancy (n = 28).
Conclusion: Both AI models showed promising performance for cancer detection on screening mammograms, and the AI markings corresponded well to the true cancer locations. © RSNA, 2025.
Citations: 0
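Thresholds 1 and 2 above are operating points defined by flagging the 3.2% and 11.1% of examinations with the highest AI scores. The sketch below shows how such percentile-based thresholds can be derived and their sensitivity evaluated; all scores and the cancer prevalence here are simulated, not BreastScreen Norway data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Simulated screening cohort: ~0.6% cancers scoring higher on average.
labels = rng.random(n) < 0.006
scores = np.where(labels, rng.normal(70, 15, n), rng.normal(30, 15, n))

for frac in (0.032, 0.111):               # thresholds 1 and 2
    cut = np.quantile(scores, 1 - frac)   # flag the top `frac` of exams
    flagged = scores >= cut
    sens = (flagged & labels).sum() / labels.sum()
    print(f"top {frac:.1%} flagged: threshold={cut:.1f}, "
          f"sensitivity={sens:.1%}")
```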
A Serial MRI-based Deep Learning Model to Predict Survival in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-01 DOI: 10.1148/ryai.230544
Jia Kou, Jun-Yi Peng, Wen-Bing Lv, Chen-Fei Wu, Zi-Hang Chen, Guan-Qun Zhou, Ya-Qin Wang, Li Lin, Li-Jun Lu, Ying Sun
Purpose: To develop and evaluate a deep learning-based prognostic model for predicting survival in locoregionally advanced nasopharyngeal carcinoma (LA-NPC) using serial MRI before and after induction chemotherapy (IC).
Materials and Methods: This multicenter retrospective study included 1039 patients with LA-NPC (779 male and 260 female patients; mean age, 44 years ± 11 [SD]) diagnosed between December 2011 and January 2016. A radiomics-clinical prognostic model (model RC) was developed from pre- and post-IC MRI acquisitions and other clinical factors using graph convolutional neural networks. The concordance index (C-index) was used to evaluate model performance in predicting disease-free survival (DFS). The survival benefits of concurrent chemoradiation therapy (CCRT) were analyzed in model-defined risk groups.
Results: The C-indexes of model RC for predicting DFS were significantly higher than those of TNM staging in the internal (0.79 vs 0.53) and external (0.79 vs 0.62; both P < .001) testing cohorts. The 5-year DFS of the model RC-defined low-risk group was significantly better than that of the high-risk group (90.6% vs 58.9%, P < .001). Among high-risk patients, those who underwent CCRT had a higher 5-year DFS rate than those who did not (58.7% vs 28.6%, P = .03). There was no evidence of a difference in 5-year DFS rate between low-risk patients who did or did not undergo CCRT (91.9% vs 81.3%, P = .19).
Conclusion: Serial MRI before and after IC can effectively help predict survival in LA-NPC. The radiomics-clinical prognostic model developed using a graph convolutional network-based deep learning method showed good risk discrimination and may facilitate risk-adapted therapy.
Keywords: Nasopharyngeal Carcinoma, Deep Learning, Induction Chemotherapy, Serial MRI, MR Imaging, Radiomics, Prognosis, Radiation Therapy/Oncology, Head/Neck. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
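Model discrimination above is summarized with the concordance index (C-index): the fraction of comparable patient pairs in which the model ranks risk consistently with observed survival. A minimal pairwise implementation for right-censored data is sketched below as a generic illustration, not the authors' code.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs in which the higher-risk patient
    experienced the event earlier. Ties in risk count as 0.5."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue  # a pair is comparable only if the earlier time is an event
        later = time > time[i]
        comparable += later.sum()
        concordant += (risk[i] > risk[later]).sum()
        concordant += 0.5 * (risk[i] == risk[later]).sum()
    return concordant / comparable

# Example: a perfect risk ranking gives a C-index of 1.0.
print(concordance_index([5, 8, 12], [1, 1, 0], [0.9, 0.5, 0.1]))
```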
Evaluating the Impact of Changes in Artificial Intelligence-derived Case Scores over Time on Digital Breast Tomosynthesis Screening Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-01 DOI: 10.1148/ryai.230597
Samantha P Zuckerman, Senthil Periaswamy, Julie L Shisler, Ameena Elahi, Christine E Edmonds, Jeffrey Hoffmeister, Emily F Conant
Purpose: To evaluate the change in digital breast tomosynthesis artificial intelligence (DBT-AI) case scores over sequential screenings.
Materials and Methods: This retrospective review included 21 108 female patients (mean age ± SD, 58.1 years ± 11.5) with 31 741 DBT screening examinations performed at a single site from February 3, 2020, to September 12, 2022. Among 7000 patients with two or more DBT-AI screenings, 1799 had a 1-year follow-up and were included in the analysis. DBT-AI case scores (range, 0-100) and differences in case score over time were determined. For each screening outcome (true positive [TP], false positive [FP], true negative [TN], false negative [FN]), the mean and median case score change was calculated.
Results: The highest average case score was seen in TP examinations (average, 75; range, 7-100; n = 41), and the lowest was seen in TN examinations (average, 34; range, 0-100; n = 1640). The largest positive case score change was also seen in TP examinations (mean change, 21.1; median change, 17). FN examinations included mammographically occult cancers diagnosed after supplemental screening and those found at symptomatic diagnostic imaging. Differences between TP and TN mean case score change (P < .001) and between TP and FP mean case score change (P = .02) were statistically significant.
Conclusion: Combining the DBT-AI case score with its change over time may help radiologists make recall decisions in DBT screening. All studies with a high case score and/or a large case score change should be carefully scrutinized to maximize screening performance.
Keywords: Mammography, Breast, Computer Aided Diagnosis (CAD). Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
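The analysis above summarizes the mean and median case score change for each screening outcome (TP, FP, TN, FN). The sketch below reproduces that kind of summary with pandas; the paired scores and outcomes are invented toy data.

```python
import pandas as pd

# Toy paired screenings: prior and current AI case scores (0-100)
# with the radiologic outcome of the current examination.
df = pd.DataFrame({
    "outcome": ["TP", "TP", "TN", "TN", "FP", "FN"],
    "prior_score": [55, 60, 35, 30, 48, 20],
    "current_score": [80, 72, 33, 36, 62, 25],
})
df["score_change"] = df["current_score"] - df["prior_score"]

# Mean and median case score change per screening outcome.
summary = df.groupby("outcome")["score_change"].agg(["mean", "median"])
print(summary)
```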
Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-22 DOI: 10.1148/ryai.230555
José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou
"Just Accepted" manuscript: peer reviewed and accepted for publication, pending copyediting, layout, and final proof review.
Purpose: To assess the impact of scanner manufacturer and scan protocol on the performance of deep learning models for classifying prostate cancer (PCa) aggressiveness on biparametric MRI (bpMRI).
Materials and Methods: In this retrospective study, 5478 cases from ProstateNet, a PCa bpMRI dataset with examinations from 13 centers, were used to develop five deep learning (DL) models to predict PCa aggressiveness with minimal lesion information and to test how using data from different subgroups defined by scanner manufacturer and endorectal coil (ERC) use (Siemens, Philips, GE with and without ERC, and the full dataset) affects model performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC). The impact of clinical features (age, prostate-specific antigen level, and Prostate Imaging Reporting and Data System [PI-RADS] score) on model performance was also evaluated.
Results: DL models were trained on 4328 bpMRI cases, and the best model achieved an AUC of 0.73 when trained and tested using data from all manufacturers. Hold-out test set performance was higher when models trained on data from one manufacturer were tested on that same manufacturer (within- and between-manufacturer AUC differences of 0.05 on average; P < .001). The addition of clinical features did not improve performance (P = .24). Learning curve analyses showed that performance remained stable as training data increased. Analysis of DL features showed that scanner manufacturer and scan protocol heavily influenced feature distributions.
Conclusion: In automated classification of PCa aggressiveness using bpMRI data, scanner manufacturer and endorectal coil use had a major impact on DL model performance and features. Published under a CC BY 4.0 license.
Citations: 0
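The core experiment trains models on one subgroup's data and tests on every subgroup, comparing within-manufacturer with between-manufacturer AUCs. The sketch below illustrates that evaluation grid with scikit-learn, using synthetic tabular features as a stand-in for the DL pipeline; the per-manufacturer distribution shift is simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
groups = ["siemens", "philips", "ge"]

# Synthetic per-manufacturer datasets with shifted feature
# distributions, mimicking scanner-dependent feature drift.
data = {}
for k, g in enumerate(groups):
    X = rng.normal(loc=0.3 * k, size=(400, 16))
    y = (X[:, 0] + rng.normal(scale=1.5, size=400) > 0.3 * k).astype(int)
    data[g] = (X, y)

# Train on each subgroup, test on all subgroups.
for train_g in groups:
    Xtr, ytr = data[train_g]
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    for test_g in groups:
        Xte, yte = data[test_g]
        auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
        tag = "within" if train_g == test_g else "between"
        print(f"train={train_g:7s} test={test_g:7s} ({tag}) AUC={auc:.2f}")
```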
RSNA 2023 Abdominal Trauma AI Challenge: Review and Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240334
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak
Purpose: To evaluate the performance of the winning machine learning models from the 2023 RSNA Abdominal Trauma Detection AI Challenge.
Materials and Methods: The competition was hosted on Kaggle and took place between July 26 and October 15, 2023. The multicenter competition dataset consisted of 4274 abdominal trauma CT scans in which solid organs (liver, spleen, and kidneys) were annotated as healthy, low-grade injury, or high-grade injury. Studies were labeled as positive or negative for the presence of bowel and mesenteric injury and active extravasation. In this study, the performances of the eight award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging performance across all models for each specified injury type.
Results: The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range, 0.90-0.94) for liver, 0.91 (range, 0.87-0.93) for splenic, and 0.94 (range, 0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range, 0.96-0.98) for high-grade liver, 0.98 (range, 0.97-0.99) for high-grade splenic, and 0.98 (range, 0.97-0.98) for high-grade kidney injuries. For the detection of bowel and mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range, 0.74-0.93) and 0.85 (range, 0.79-0.89), respectively.
Conclusion: The award-winning models from the artificial intelligence challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms.
Keywords: Abdominal Trauma, CT, American Association for the Surgery of Trauma, Machine Learning, Artificial Intelligence. Supplemental material is available for this article. © RSNA, 2024.
Citations: 0
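The challenge metrics above are means with ranges taken across the eight winning models for each injury category. A minimal sketch of that aggregation follows; the per-model AUCs are invented placeholders, not the challenge results.

```python
import numpy as np

# Placeholder per-model AUCs for two injury categories (8 models each).
aucs = {
    "liver_any": [0.90, 0.91, 0.92, 0.93, 0.92, 0.94, 0.91, 0.93],
    "bowel":     [0.74, 0.82, 0.85, 0.88, 0.93, 0.84, 0.86, 0.87],
}
for injury, vals in aucs.items():
    v = np.array(vals)
    # Mean across models, with the min-max range, as reported per category.
    print(f"{injury}: mean AUC {v.mean():.2f} "
          f"(range, {v.min():.2f}-{v.max():.2f})")
```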
Using Artificial Intelligence to Improve Diagnosis of Unruptured Intracranial Aneurysms.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240696
Shuncong Wang
Radiology: Artificial Intelligence 2025;7(1):e240696. No abstract available.
Citations: 0