Latest Articles in Radiology-Artificial Intelligence

The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.240101
Jeffrey D Rudie, Hui-Ming Lin, Robyn L Ball, Sabeena Jalal, Luciano M Prevedello, Savvas Nicolaou, Brett S Marinelli, Adam E Flanders, Kirti Magudia, George Shih, Melissa A Davis, John Mongan, Peter D Chang, Ferco H Berger, Sebastiaan Hermans, Meng Law, Tyler Richards, Jan-Peter Grunz, Andreas Steven Kunz, Shobhit Mathur, Sandro Galea-Soler, Andrew D Chung, Saif Afat, Chin-Chi Kuo, Layal Aweidah, Ana Villanueva Campos, Arjuna Somasundaram, Felipe Antonio Sanchez Tijmes, Attaporn Jantarangkoon, Leonardo Kayat Bittencourt, Michael Brassil, Ayoub El Hajjami, Hakan Dogan, Muris Becircic, Agrahara G Bharatkumar, Eduardo Moreno Júdice de Mattos Farina, Errol Colak
Abstract: Supplemental material is available for this article.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605137/pdf/
Citations: 0
Watch Your Back! How Deep Learning Is Cracking the Real World of CT for Cervical Spine Fractures.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.240604
Riccardo Levi, Letterio S Politi
No abstract available. Radiology: Artificial Intelligence, 6(6), e240604.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605139/pdf/
Citations: 0
AI-integrated Screening to Replace Double Reading of Mammograms: A Population-wide Accuracy and Feasibility Study.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.230529
Mohammad T Elhakim, Sarah W Stougaard, Ole Graumann, Mads Nielsen, Oke Gerke, Lisbet B Larsen, Benjamin S B Rasmussen
Abstract: Mammography screening supported by deep learning-based artificial intelligence (AI) solutions can potentially reduce workload without compromising breast cancer detection accuracy, but the site of deployment in the workflow might be crucial. This retrospective study compared three simulated AI-integrated screening scenarios with standard double reading with arbitration in a sample of 249 402 mammograms from a representative screening population. A commercial AI system replaced the first reader (scenario 1: integrated AI-first), the second reader (scenario 2: integrated AI-second), or both readers for triaging of low- and high-risk cases (scenario 3: integrated AI-triage). AI threshold values were chosen based partly on previous validation and on setting the screen-read volume reduction at approximately 50% across scenarios. Detection accuracy measures were calculated. Compared with standard double reading, integrated AI-first showed no evidence of a difference in accuracy metrics except for a higher arbitration rate (+0.99%, P < .001). Integrated AI-second had lower sensitivity (-1.58%, P < .001), negative predictive value (NPV) (-0.01%, P < .001), and recall rate (-0.06%, P = .04) but a higher positive predictive value (PPV) (+0.03%, P < .001) and arbitration rate (+1.22%, P < .001). Integrated AI-triage achieved higher sensitivity (+1.33%, P < .001), PPV (+0.36%, P = .03), and NPV (+0.01%, P < .001) but a lower arbitration rate (-0.88%, P < .001). Replacing one or both readers with AI appears feasible; however, the site of application in the workflow can have clinically relevant effects on accuracy and workload.
Keywords: Mammography, Breast, Neoplasms-Primary, Screening, Epidemiology, Diagnosis, Convolutional Neural Network (CNN)
Supplemental material is available for this article. Published under a CC BY 4.0 license.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605135/pdf/
Citations: 0
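The AI-triage scenario above routes each mammogram by its AI risk score: low scores are cleared automatically, high scores are recalled, and only the middle band goes to human double reading. A minimal sketch of that arithmetic follows; the thresholds and scores are invented for illustration and are not the study's values.

```python
# Hypothetical sketch of the AI-triage scenario: exams below a low threshold
# are auto-cleared, exams above a high threshold are auto-recalled, and only
# the middle band is sent to human double reading.

def triage(scores, low=0.1, high=0.9):
    """Split exam scores into auto-clear, double-read, and auto-recall bands."""
    auto_clear = [s for s in scores if s < low]
    double_read = [s for s in scores if low <= s <= high]
    auto_recall = [s for s in scores if s > high]
    return auto_clear, double_read, auto_recall

scores = [0.02, 0.05, 0.3, 0.5, 0.85, 0.95, 0.97, 0.01]
clear, mid, recall = triage(scores)
# Screen-read volume reduction: share of exams that skip human double reading.
reduction = (len(clear) + len(recall)) / len(scores)
print(f"double-read: {len(mid)}, volume reduction: {reduction:.0%}")
```

In the study, the two thresholds were tuned so this reduction landed at approximately 50% across scenarios.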
Assessing the Performance of Models from the 2022 RSNA Cervical Spine Fracture Detection Competition at a Level I Trauma Center.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.230550
Zixuan Hu, Markand Patel, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Mitra Naseri, Shobhit Mathur, Robert Moreland, Jefferson Wilson, Christopher Witiw, Kristen W Yeom, Qishen Ha, Darragh Hanley, Selim Seferbekov, Hao Chen, Philipp Singer, Christof Henkel, Pascal Pfeiffer, Ian Pan, Harshit Sheoran, Wuqi Li, Adam E Flanders, Felipe C Kitamura, Tyler Richards, Jason Talbott, Ervin Sejdić, Errol Colak
Abstract: Purpose To evaluate the performance of the top models from the RSNA 2022 Cervical Spine Fracture Detection challenge on a clinical test dataset of both noncontrast and contrast-enhanced CT scans acquired at a level I trauma center. Materials and Methods Seven top-performing models in the RSNA 2022 Cervical Spine Fracture Detection challenge were retrospectively evaluated on a clinical test set of 1828 CT scans (from 1829 series: 130 positive for fracture, 1699 negative for fracture; 1308 noncontrast, 521 contrast enhanced) from 1779 patients (mean age, 55.8 years ± 22.1 [SD]; 1154 [64.9%] male patients). Scans were acquired without exclusion criteria over 1 year (January-December 2022) from the emergency department of a neurosurgical and level I trauma center. Model performance was assessed using area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. False-positive and false-negative cases were further analyzed by a neuroradiologist. Results Although all seven models showed decreased performance on the clinical test set compared with the challenge dataset, the models maintained high performances. On noncontrast CT scans, the models achieved a mean AUC of 0.89 (range: 0.79-0.92), sensitivity of 67.0% (range: 30.9%-80.0%), and specificity of 92.9% (range: 82.1%-99.0%). On contrast-enhanced CT scans, the models had a mean AUC of 0.88 (range: 0.76-0.94), sensitivity of 81.9% (range: 42.7%-100.0%), and specificity of 72.1% (range: 16.4%-92.8%). The models identified 10 fractures missed by radiologists. False-positive cases were more common in contrast-enhanced scans and observed in patients with degenerative changes on noncontrast scans, while false-negative cases were often associated with degenerative changes and osteopenia. Conclusion The winning models from the 2022 RSNA AI Challenge demonstrated high performance for cervical spine fracture detection on a clinical test dataset, warranting further evaluation for their use as clinical support tools.
Keywords: Feature Detection, Supervised Learning, Convolutional Neural Network (CNN), Genetic Algorithms, CT, Spine, Technology Assessment, Head/Neck
Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Levi and Politi in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605142/pdf/
Citations: 0
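The AUC, sensitivity, and specificity figures reported above can be computed directly from per-scan labels and model scores. A minimal sketch in pure Python follows; the labels and scores are made-up toy data, not the study's.

```python
# Illustrative computation of the reported metrics (AUC, sensitivity,
# specificity) from binary fracture labels and model scores; toy data only.

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) identity: the probability that a
    randomly chosen positive case scores higher than a random negative one."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold=0.5):
    """Sensitivity and specificity at a fixed operating threshold."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    return tp / labels.count(1), tn / labels.count(0)

labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.05]
print(auc(labels, scores), sens_spec(labels, scores))
```

The wide sensitivity ranges in the abstract correspond to each model's choice of operating threshold on a curve like this one.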
AI as a Second Reader Can Reduce Radiologists' Workload and Increase Accuracy in Screening Mammography.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.240624
Abhinav Suri
No abstract available. Radiology: Artificial Intelligence, 6(6), e240624.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605140/pdf/
Citations: 0
Transformers in the Womb: Swin-UNETR Takes on Fetal Brain Imaging.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.240677
Sanjay P Prabhu
No abstract available. Radiology: Artificial Intelligence, 6(6), e240677.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605138/pdf/
Citations: 0
Optimizing Performance of Transformer-based Models for Fetal Brain MR Image Segmentation.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.230229
Nicolò Pecco, Pasquale Anthony Della Rosa, Matteo Canini, Gianluca Nocera, Paola Scifo, Paolo Ivo Cavoretto, Massimo Candiani, Andrea Falini, Antonella Castellano, Cristina Baldoli
Abstract: Purpose To test the performance of a transformer-based model when manipulating pretraining weights, dataset size, and input size, and to compare the best model with reference standard and state-of-the-art models for a resting-state functional MRI (rs-fMRI) fetal brain extraction task. Materials and Methods An internal retrospective dataset (172 fetuses, 519 images; collected 2018-2022) was used to investigate the influence of dataset size, pretraining approach, and image input size on Swin-UNETR and UNETR models. The internal and external (131 fetuses, 561 images) datasets were used to cross-validate and to assess the generalization capability of the best model versus state-of-the-art models across scanner types and numbers of gestational weeks (GWs). The Dice similarity coefficient (DSC) and the balanced average Hausdorff distance (BAHD) were used as segmentation performance metrics. Generalized estimating equation multifactorial models were used to assess significant model and interaction effects of interest. Results The Swin-UNETR model was not affected by the pretraining approach or dataset size and performed best with the mean dataset image size, with a mean DSC of 0.92 and BAHD of 0.097. Swin-UNETR was not affected by scanner type. Generalization results showed that Swin-UNETR had lower performance compared with the reference standard models on the internal dataset and comparable performance on the external dataset. Cross-validation on internal and external test sets demonstrated better and comparable performance of Swin-UNETR versus convolutional neural network architectures during the late-fetal period (GWs > 25) but lower performance during the midfetal period (GWs ≤ 25). Conclusion Swin-UNETR showed flexibility in dealing with smaller datasets, regardless of pretraining approach. For fetal brain extraction from rs-fMRI images, Swin-UNETR showed performance comparable with that of reference standard models during the late-fetal period and lower performance during the early GW period.
Keywords: Transformers, CNN, Medical Imaging Segmentation, MRI, Dataset Size, Input Size, Transfer Learning
Supplemental material is available for this article. © RSNA, 2024.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605146/pdf/
Citations: 0
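The Dice similarity coefficient used as the primary metric above measures voxel overlap between a predicted and a reference mask. A minimal sketch, with flat 0/1 lists standing in for voxel arrays:

```python
# Minimal sketch of the Dice similarity coefficient (DSC) used to score
# segmentation overlap; the masks are invented toy examples.

def dice(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for equal-length binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both masks empty: perfect

pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(f"DSC = {dice(pred, truth):.2f}")
```

A DSC of 0.92, as reported for Swin-UNETR, means the predicted and reference brain masks share 92% of their combined voxel mass.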
External Testing of a Deep Learning Model to Estimate Biologic Age Using Chest Radiographs.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-09-01 DOI: 10.1148/ryai.230433
Jong Hyuk Lee, Dongheon Lee, Michael T Lu, Vineet K Raghu, Jin Mo Goo, Yunhee Choi, Seung Ho Choi, Hyungjin Kim
Abstract: Purpose To assess the prognostic value of a deep learning-based chest radiographic age (CXR-Age) model in a large external test cohort of Asian individuals. Materials and Methods This single-center, retrospective study included chest radiographs from consecutive, asymptomatic Asian individuals aged 50-80 years who underwent health checkups between January 2004 and June 2018. The study performed a dedicated external test of a previously developed CXR-Age model, which predicts an age adjusted based on the risk of all-cause mortality. Adjusted hazard ratios (HRs) of CXR-Age for all-cause, cardiovascular, lung cancer, and respiratory disease mortality were assessed using multivariable Cox or Fine-Gray models, and their added value was evaluated by likelihood ratio tests. Results A total of 36 924 individuals (mean chronological age, 58 years ± 7 [SD]; CXR-Age, 60 years ± 5; 22 352 male) were included. During a median follow-up of 11.0 years, 1250 individuals (3.4%) died, including 153 cardiovascular (0.4%), 166 lung cancer (0.4%), and 98 respiratory (0.3%) deaths. CXR-Age was a significant risk factor for all-cause (adjusted HR at chronological age of 50 years, 1.03; at 60 years, 1.05; at 70 years, 1.07), cardiovascular (adjusted HR, 1.11), lung cancer (adjusted HR for individuals who formerly smoked, 1.12; for those who currently smoke, 1.05), and respiratory disease (adjusted HR, 1.12) mortality (P < .05 for all). The likelihood ratio test demonstrated the added prognostic value of CXR-Age over clinical factors, including chronological age, for all outcomes (P < .001 for all). Conclusion Deep learning-based chest radiographic age was associated with various survival outcomes and added value to clinical factors in asymptomatic Asian individuals, suggesting its generalizability.
Keywords: Conventional Radiography, Thorax, Heart, Lung, Mediastinum, Outcomes Analysis, Quantification, Prognosis, Convolutional Neural Network (CNN)
Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Adams and Bressem in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427929/pdf/
Citations: 0
Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-09-01 DOI: 10.1148/ryai.230521
Hao Li, Han Liu, Heinrich von Busch, Robert Grimm, Henkjan Huisman, Angela Tong, David Winkel, Tobias Penzkofer, Ivan Shabunin, Moon Hyung Choi, Qingsong Yang, Dieter Szolar, Steven Shea, Fergus Coakley, Mukesh Harisinghani, Ipek Oguz, Dorin Comaniciu, Ali Kamen, Bin Lou
Abstract: Purpose To determine whether an unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite biparametric (bp) MRI datasets. Materials and Methods This retrospective study included data from 5150 patients (14 191 samples) collected across nine imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bpMRI datasets. The method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual diffusion-weighted (DW) images acquired using various b values, to align with the style of images acquired using the b values recommended by the Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1692 test cases (2393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for the baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PCa lesions with PI-RADS scores of 3 or greater, and 0.77 and 0.80 (P < .001) for lesions with PI-RADS scores of 4 or greater. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for lesions with PI-RADS scores of 3 or greater and 0.50 and 0.77 (P < .001) for lesions with PI-RADS scores of 4 or greater. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS-recommended DWI protocol (eg, with an extremely high b value).
Keywords: Prostate Cancer Detection, Multisite, Unsupervised Domain Adaptation, Diffusion-weighted Imaging, b Value
Supplemental material is available for this article. © RSNA, 2024.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11449150/pdf/
Citations: 0
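The statistical comparison above was performed via bootstrapping: resample test cases with replacement and recompute the AUC gap between the two models on each resample. A hedged sketch follows; the labels, scores, and helper names are invented for illustration, not the study's code or data.

```python
# Sketch of a bootstrap confidence interval for the AUC gain of one model
# (here called "uda") over another ("base"); all values are toy data.
import random

def auc(labels, scores):
    """AUC via the rank-sum identity over positive/negative score pairs."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_gap(labels, base, uda, n_boot=1000, seed=0):
    """Return an approximate 95% percentile CI for auc(uda) - auc(base)."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n_boot):
        sample = [rng.randrange(len(labels)) for _ in labels]
        ls = [labels[i] for i in sample]
        if 0 < sum(ls) < len(ls):  # resample must contain both classes
            gaps.append(auc(ls, [uda[i] for i in sample]) -
                        auc(ls, [base[i] for i in sample]))
    gaps.sort()
    return gaps[int(0.025 * len(gaps))], gaps[int(0.975 * len(gaps))]

labels = [1] * 6 + [0] * 10
base = [0.7, 0.4, 0.6, 0.3, 0.8, 0.5, 0.2, 0.6, 0.1, 0.4,
        0.3, 0.5, 0.2, 0.1, 0.3, 0.4]
uda  = [0.9, 0.8, 0.7, 0.6, 0.9, 0.7, 0.2, 0.3, 0.1, 0.2,
        0.3, 0.4, 0.2, 0.1, 0.3, 0.2]
lo95, hi95 = bootstrap_auc_gap(labels, base, uda)
print(f"95% CI for AUC gain: [{lo95:.2f}, {hi95:.2f}]")
```

If the whole interval sits above zero, the gain is unlikely to be a resampling artifact, which is the logic behind the P values reported above.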
nnU-Net-based Segmentation of Tumor Subcompartments in Pediatric Medulloblastoma Using Multiparametric MRI: A Multi-institutional Study.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-09-01 DOI: 10.1148/ryai.230115
Rohan Bareja, Marwa Ismail, Douglas Martin, Ameya Nayate, Ipsa Yadav, Murad Labbad, Prateek Dullur, Sanya Garg, Benita Tamrazi, Ralph Salloum, Ashley Margol, Alexander Judkins, Sukanya Iyer, Peter de Blank, Pallavi Tiwari
Abstract: Purpose To evaluate nnU-Net-based segmentation models for automated delineation of medulloblastoma tumors on multi-institutional MRI scans. Materials and Methods This retrospective study included 78 pediatric patients (52 male, 26 female; ages 2-18 years) with medulloblastoma from three sites (28 from hospital A, 18 from hospital B, and 32 from hospital C) who had data available from three clinical MRI protocols (gadolinium-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery). The scans were retrospectively collected from 2000 until May 2019. Reference standard annotations of the tumor habitat, including enhancing tumor, edema, and cystic core plus nonenhancing tumor subcompartments, were performed by two experienced neuroradiologists. Preprocessing included registration to age-appropriate atlases, skull stripping, bias correction, and intensity matching. The two models were trained as follows: (a) the transfer learning nnU-Net model was pretrained on an adult glioma cohort (n = 484) and fine-tuned on medulloblastoma studies using Models Genesis, and (b) the direct deep learning nnU-Net model was trained directly on the medulloblastoma datasets, across fivefold cross-validation. Model robustness was evaluated on the three datasets using different combinations of training and test sets, with data from two sites at a time used for training and data from the third site used for testing. Results Analysis on the three test sites yielded Dice scores of 0.81, 0.86, and 0.86 and 0.80, 0.86, and 0.85 for tumor habitat; 0.68, 0.84, and 0.77 and 0.67, 0.83, and 0.76 for enhancing tumor; 0.56, 0.71, and 0.69 and 0.56, 0.71, and 0.70 for edema; and 0.32, 0.48, and 0.43 and 0.29, 0.44, and 0.41 for cystic core plus nonenhancing tumor for the transfer learning and direct nnU-Net models, respectively. The models were largely robust to site-specific variations. Conclusion nnU-Net segmentation models hold promise for accurate, robust automated delineation of medulloblastoma tumor subcompartments, potentially leading to more effective radiation therapy planning in pediatric medulloblastoma.
Keywords: Pediatrics, MR Imaging, Segmentation, Transfer Learning, Medulloblastoma, nnU-Net, MRI
Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Rudie and Correia de Verdier in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427926/pdf/
Citations: 0
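The robustness evaluation above is a leave-one-site-out scheme: train on two hospitals, test on the held-out third, cycling through all combinations. A minimal sketch, with placeholder site labels rather than the study's data pipeline:

```python
# Sketch of the leave-one-site-out evaluation described above: each hospital
# is held out in turn while the other two supply the training data.
# Site labels are placeholders; the train/test loop body is illustrative.

def cross_site_splits(site_names):
    """Yield (train_sites, test_site) pairs, holding out one site at a time."""
    for held_out in site_names:
        train = tuple(s for s in site_names if s != held_out)
        yield train, held_out

sites = ("A", "B", "C")  # the three hospitals in the study
for train, test in cross_site_splits(sites):
    print(f"train on hospitals {' + '.join(train)}, test on hospital {test}")
```

This yields the three train/test combinations whose per-site Dice scores are reported in the Results above.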