Radiology-Artificial Intelligence: Latest Publications

Deep Learning Segmentation of Infiltrative and Enhancing Cellular Tumor at Pre- and Posttreatment Multishell Diffusion MRI of Glioblastoma.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-09-01 DOI: 10.1148/ryai.230489
Louis Gagnon, Diviya Gupta, George Mastorakos, Nathan White, Vanessa Goodwill, Carrie R McDonald, Thomas Beaumont, Christopher Conlin, Tyler M Seibert, Uyen Nguyen, Jona Hattangadi-Gluth, Santosh Kesari, Jessica D Schulte, David Piccioni, Kathleen M Schmainda, Nikdokht Farid, Anders M Dale, Jeffrey D Rudie
{"title":"Deep Learning Segmentation of Infiltrative and Enhancing Cellular Tumor at Pre- and Posttreatment Multishell Diffusion MRI of Glioblastoma.","authors":"Louis Gagnon, Diviya Gupta, George Mastorakos, Nathan White, Vanessa Goodwill, Carrie R McDonald, Thomas Beaumont, Christopher Conlin, Tyler M Seibert, Uyen Nguyen, Jona Hattangadi-Gluth, Santosh Kesari, Jessica D Schulte, David Piccioni, Kathleen M Schmainda, Nikdokht Farid, Anders M Dale, Jeffrey D Rudie","doi":"10.1148/ryai.230489","DOIUrl":"10.1148/ryai.230489","url":null,"abstract":"<p><p>Purpose To develop and validate a deep learning (DL) method to detect and segment enhancing and nonenhancing cellular tumor on pre- and posttreatment MRI scans in patients with glioblastoma and to predict overall survival (OS) and progression-free survival (PFS). Materials and Methods This retrospective study included 1397 MRI scans in 1297 patients with glioblastoma, including an internal set of 243 MRI scans (January 2010 to June 2022) for model training and cross-validation and four external test cohorts. Cellular tumor maps were segmented by two radiologists on the basis of imaging, clinical history, and pathologic findings. Multimodal MRI data with perfusion and multishell diffusion imaging were inputted into a nnU-Net DL model to segment cellular tumor. Segmentation performance (Dice score) and performance in distinguishing recurrent tumor from posttreatment changes (area under the receiver operating characteristic curve [AUC]) were quantified. Model performance in predicting OS and PFS was assessed using Cox multivariable analysis. Results A cohort of 178 patients (mean age, 56 years ± 13 [SD]; 116 male, 62 female) with 243 MRI timepoints, as well as four external datasets with 55, 70, 610, and 419 MRI timepoints, respectively, were evaluated. The median Dice score was 0.79 (IQR, 0.53-0.89), and the AUC for detecting residual or recurrent tumor was 0.84 (95% CI: 0.79, 0.89). In the internal test set, estimated cellular tumor volume was significantly associated with OS (hazard ratio [HR] = 1.04 per milliliter; <i>P</i> < .001) and PFS (HR = 1.04 per milliliter; <i>P</i> < .001) after adjustment for age, sex, and gross total resection (GTR) status. In the external test sets, estimated cellular tumor volume was significantly associated with OS (HR = 1.01 per milliliter; <i>P</i> < .001) after adjustment for age, sex, and GTR status. Conclusion A DL model incorporating advanced imaging could accurately segment enhancing and nonenhancing cellular tumor, distinguish recurrent or residual tumor from posttreatment changes, and predict OS and PFS in patients with glioblastoma. <b>Keywords:</b> Segmentation, Glioblastoma, Multishell Diffusion MRI <i>Supplemental material is available for this article.</i> © RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230489"},"PeriodicalIF":8.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427928/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
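The segmentation evaluation in the glioblastoma entry above is summarized by a median Dice score across cases. The following is a minimal, self-contained sketch of that metric on synthetic binary masks; it is illustrative only and not the authors' code, and the random masks and shapes are placeholders.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Hypothetical example: per-case Dice scores summarized by median and IQR,
# as the study does (the values below are illustrative only).
rng = np.random.default_rng(0)
per_case_dice = [
    dice_score(rng.random((64, 64, 64)) > 0.5, rng.random((64, 64, 64)) > 0.5)
    for _ in range(5)
]
print("median Dice:", np.median(per_case_dice))
print("IQR:", np.percentile(per_case_dice, [25, 75]))
```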
Fluid Intelligence: AI's Role in Accurate Measurement of Ascites.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-09-01 DOI: 10.1148/ryai.240377
Alex M Aisen, Pedro S Rodrigues
{"title":"Fluid Intelligence: AI's Role in Accurate Measurement of Ascites.","authors":"Alex M Aisen, Pedro S Rodrigues","doi":"10.1148/ryai.240377","DOIUrl":"10.1148/ryai.240377","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"6 5","pages":"e240377"},"PeriodicalIF":8.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427919/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial Intelligence Outcome Prediction in Neonates with Encephalopathy (AI-OPiNE).
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-09-01 DOI: 10.1148/ryai.240076
Christopher O Lew, Evan Calabrese, Joshua V Chen, Felicia Tang, Gunvant Chaudhari, Amanda Lee, John Faro, Sandra Juul, Amit Mathur, Robert C McKinstry, Jessica L Wisnowski, Andreas Rauschecker, Yvonne W Wu, Yi Li
{"title":"Artificial Intelligence Outcome Prediction in Neonates with Encephalopathy (AI-OPiNE).","authors":"Christopher O Lew, Evan Calabrese, Joshua V Chen, Felicia Tang, Gunvant Chaudhari, Amanda Lee, John Faro, Sandra Juul, Amit Mathur, Robert C McKinstry, Jessica L Wisnowski, Andreas Rauschecker, Yvonne W Wu, Yi Li","doi":"10.1148/ryai.240076","DOIUrl":"10.1148/ryai.240076","url":null,"abstract":"<p><p>Purpose To develop a deep learning algorithm to predict 2-year neurodevelopmental outcomes in neonates with hypoxic-ischemic encephalopathy using MRI and basic clinical data. Materials and Methods In this study, MRI data of term neonates with encephalopathy in the High-dose Erythropoietin for Asphyxia and Encephalopathy (HEAL) trial (ClinicalTrials.gov: NCT02811263), who were enrolled from 17 institutions between January 25, 2017, and October 9, 2019, were retrospectively analyzed. The harmonized MRI protocol included T1-weighted, T2-weighted, and diffusion tensor imaging. Deep learning classifiers were trained to predict the primary outcome of the HEAL trial (death or any neurodevelopmental impairment at 2 years) using multisequence MRI and basic clinical variables, including sex and gestational age at birth. Model performance was evaluated on test sets comprising 10% of cases from 15 institutions (in-distribution test set, <i>n</i> = 41) and 10% of cases from two institutions (out-of-distribution test set, <i>n</i> = 41). Model performance in predicting additional secondary outcomes, including death alone, was also assessed. Results For the 414 neonates (mean gestational age, 39 weeks ± 1.4 [SD]; 232 male, 182 female), in the study cohort, 198 (48%) died or had any neurodevelopmental impairment at 2 years. The deep learning model achieved an area under the receiver operating characteristic curve (AUC) of 0.74 (95% CI: 0.60, 0.86) and 63% accuracy in the in-distribution test set and an AUC of 0.77 (95% CI: 0.63, 0.90) and 78% accuracy in the out-of-distribution test set. Performance was similar or better for predicting secondary outcomes. Conclusion Deep learning analysis of neonatal brain MRI yielded high performance for predicting 2-year neurodevelopmental outcomes. <b>Keywords:</b> Convolutional Neural Network (CNN), Prognosis, Pediatrics, Brain, Brain Stem Clinical trial registration no. NCT02811263 <i>Supplemental material is available for this article.</i> © RSNA, 2024 See also commentary by Rafful and Reis Teixeira in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240076"},"PeriodicalIF":8.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427921/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
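The neonatal outcome study above reports AUCs with 95% CIs on small test sets (n = 41). One common way to obtain such an interval is a percentile bootstrap over test cases; the sketch below shows that generic approach on simulated labels and scores and is not the study's evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUC with a percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    aucs = []
    n = len(y_true)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Simulated test set roughly the size of the in-distribution set (n = 41).
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 41)
scores = np.clip(y * 0.3 + rng.normal(0.5, 0.25, 41), 0, 1)
auc, ci = bootstrap_auc_ci(y, scores)
print(f"AUC = {auc:.2f}, 95% CI: {ci[0]:.2f}, {ci[1]:.2f}")
```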
Presurgical Upgrade Prediction of DCIS to Invasive Ductal Carcinoma Using Time-dependent Deep Learning Models with DCE MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-09-01 DOI: 10.1148/ryai.230348
John D Mayfield, Dana Ataya, Mahmoud Abdalah, Olya Stringfield, Marilyn M Bui, Natarajan Raghunand, Bethany Niell, Issam El Naqa
{"title":"Presurgical Upgrade Prediction of DCIS to Invasive Ductal Carcinoma Using Time-dependent Deep Learning Models with DCE MRI.","authors":"John D Mayfield, Dana Ataya, Mahmoud Abdalah, Olya Stringfield, Marilyn M Bui, Natarajan Raghunand, Bethany Niell, Issam El Naqa","doi":"10.1148/ryai.230348","DOIUrl":"10.1148/ryai.230348","url":null,"abstract":"<p><p>Purpose To determine whether time-dependent deep learning models can outperform single time point models in predicting preoperative upgrade of ductal carcinoma in situ (DCIS) to invasive malignancy at dynamic contrast-enhanced (DCE) breast MRI without a lesion segmentation prerequisite. Materials and Methods In this exploratory study, 154 cases of biopsy-proven DCIS (25 upgraded at surgery and 129 not upgraded) were selected consecutively from a retrospective cohort of preoperative DCE MRI in women with a mean age of 59 years at time of diagnosis from 2012 to 2022. Binary classification was implemented with convolutional neural network (CNN)-long short-term memory (LSTM) architectures benchmarked against traditional CNNs without manual segmentation of the lesions. Combinatorial performance analysis of ResNet50 versus VGG16-based models was performed with each contrast phase. Binary classification area under the receiver operating characteristic curve (AUC) was reported. Results VGG16-based models consistently provided better holdout test AUCs than did ResNet50 in CNN and CNN-LSTM studies (multiphase test AUC, 0.67 vs 0.59, respectively, for CNN models [<i>P</i> = .04] and 0.73 vs 0.62 for CNN-LSTM models [<i>P</i> = .008]). The time-dependent model (CNN-LSTM) provided a better multiphase test AUC over single time point (CNN) models (0.73 vs 0.67; <i>P</i> = .04). Conclusion Compared with single time point architectures, sequential deep learning algorithms using preoperative DCE MRI improved prediction of DCIS lesions upgraded to invasive malignancy without the need for lesion segmentation. <b>Keywords:</b> MRI, Dynamic Contrast-enhanced, Breast, Convolutional Neural Network (CNN) <i>Supplemental material is available for this article.</i> © RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230348"},"PeriodicalIF":8.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427917/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141427769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
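The time-dependent models in the DCIS entry above combine a CNN image encoder with an LSTM over dynamic contrast phases. The PyTorch sketch below illustrates that general CNN-LSTM pattern with a deliberately tiny encoder in place of VGG16/ResNet50 and hypothetical input sizes; it is a simplified illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class PhaseCNNLSTM(nn.Module):
    """Encode each DCE phase with a shared CNN, then model the phase sequence with an LSTM."""
    def __init__(self, feat_dim=64, hidden_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # binary output: upgraded vs not upgraded

    def forward(self, x):                          # x: (batch, phases, 1, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1))      # (batch * phases, feat_dim)
        feats = feats.view(b, t, -1)               # (batch, phases, feat_dim)
        _, (h_n, _) = self.lstm(feats)             # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1]).squeeze(-1)      # one logit per case

# Hypothetical batch: 2 cases, 4 contrast phases, 128 x 128 images.
model = PhaseCNNLSTM()
logits = model(torch.randn(2, 4, 1, 128, 128))
print(logits.shape)  # torch.Size([2])
```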
Advancing Pediatric Neuro-Oncology: Multi-institutional nnU-Net Segmentation of Medulloblastoma.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-09-01 DOI: 10.1148/ryai.240517
Jeffrey D Rudie, Maria Correia de Verdier
{"title":"Advancing Pediatric Neuro-Oncology: Multi-institutional nnU-Net Segmentation of Medulloblastoma.","authors":"Jeffrey D Rudie, Maria Correia de Verdier","doi":"10.1148/ryai.240517","DOIUrl":"10.1148/ryai.240517","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"6 5","pages":"e240517"},"PeriodicalIF":8.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427924/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.230364
Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay, Lucas Shorten, Cyril Bruge, Lotfi Hacein-Bey, Philippe Amouyel, Jean-Pierre Pruvo, Gregory Kuchcinski, Aghiles Hamroun
{"title":"Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports.","authors":"Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay, Lucas Shorten, Cyril Bruge, Lotfi Hacein-Bey, Philippe Amouyel, Jean-Pierre Pruvo, Gregory Kuchcinski, Aghiles Hamroun","doi":"10.1148/ryai.230364","DOIUrl":"10.1148/ryai.230364","url":null,"abstract":"<p><p>Purpose To assess the performance of a local open-source large language model (LLM) in various information extraction tasks from real-life emergency brain MRI reports. Materials and Methods All consecutive emergency brain MRI reports written in 2022 from a French quaternary center were retrospectively reviewed. Two radiologists identified MRI scans that were performed in the emergency department for headaches. Four radiologists scored the reports' conclusions as either normal or abnormal. Abnormalities were labeled as either headache-causing or incidental. Vicuna (LMSYS Org), an open-source LLM, performed the same tasks. Vicuna's performance metrics were evaluated using the radiologists' consensus as the reference standard. Results Among the 2398 reports during the study period, radiologists identified 595 that included headaches in the indication (median age of patients, 35 years [IQR, 26-51 years]; 68% [403 of 595] women). A positive finding was reported in 227 of 595 (38%) cases, 136 of which could explain the headache. The LLM had a sensitivity of 98.0% (95% CI: 96.5, 99.0) and specificity of 99.3% (95% CI: 98.8, 99.7) for detecting the presence of headache in the clinical context, a sensitivity of 99.4% (95% CI: 98.3, 99.9) and specificity of 98.6% (95% CI: 92.2, 100.0) for the use of contrast medium injection, a sensitivity of 96.0% (95% CI: 92.5, 98.2) and specificity of 98.9% (95% CI: 97.2, 99.7) for study categorization as either normal or abnormal, and a sensitivity of 88.2% (95% CI: 81.6, 93.1) and specificity of 73% (95% CI: 62, 81) for causal inference between MRI findings and headache. Conclusion An open-source LLM was able to extract information from free-text radiology reports with excellent accuracy without requiring further training. <b>Keywords:</b> Large Language Model (LLM), Generative Pretrained Transformers (GPT), Open Source, Information Extraction, Report, Brain, MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also the commentary by Akinci D'Antonoli and Bluethgen in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230364"},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294959/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
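The performance figures in the report-extraction study above are sensitivities and specificities of the LLM's labels against the radiologists' consensus. A minimal sketch of that tabulation, with made-up labels and a simple normal-approximation CI, is shown below; it is not the study's evaluation code.

```python
import math

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary reference and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical labels: 1 = headache mentioned in the clinical context, 0 = not mentioned.
reference  = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]   # radiologists' consensus
llm_output = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]   # LLM predictions
sens, spec = sens_spec(reference, llm_output)
print("sensitivity:", sens, wald_ci(sens, reference.count(1)))
print("specificity:", spec, wald_ci(spec, reference.count(0)))
```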
Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.230431
Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D Halling-Brown, Kenneth C Young, Nouman S Chaudhry, Peter Harris, Lucy M Warren
{"title":"Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort.","authors":"Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D Halling-Brown, Kenneth C Young, Nouman S Chaudhry, Peter Harris, Lucy M Warren","doi":"10.1148/ryai.230431","DOIUrl":"10.1148/ryai.230431","url":null,"abstract":"<p><p>Purpose To develop an artificial intelligence (AI) deep learning tool capable of predicting future breast cancer risk from a current negative screening mammographic examination and to evaluate the model on data from the UK National Health Service Breast Screening Program. Materials and Methods The OPTIMAM Mammography Imaging Database contains screening data, including mammograms and information on interval cancers, for more than 300 000 female patients who attended screening at three different sites in the United Kingdom from 2012 onward. Cancer-free screening examinations from women aged 50-70 years were performed and classified as risk-positive or risk-negative based on the occurrence of cancer within 3 years of the original examination. Examinations with confirmed cancer and images containing implants were excluded. From the resulting 5264 risk-positive and 191 488 risk-negative examinations, training (<i>n</i> = 89 285), validation (<i>n</i> = 2106), and test (<i>n</i> = 39 351) datasets were produced for model development and evaluation. The AI model was trained to predict future cancer occurrence based on screening mammograms and patient age. Performance was evaluated on the test dataset using the area under the receiver operating characteristic curve (AUC) and compared across subpopulations to assess potential biases. Interpretability of the model was explored, including with saliency maps. Results On the hold-out test set, the AI model achieved an overall AUC of 0.70 (95% CI: 0.69, 0.72). There was no evidence of a difference in performance across the three sites, between patient ethnicities, or across age groups. Visualization of saliency maps and sample images provided insights into the mammographic features associated with AI-predicted cancer risk. Conclusion The developed AI tool showed good performance on a multisite, United Kingdom-specific dataset. <b>Keywords:</b> Deep Learning, Artificial Intelligence, Breast Cancer, Screening, Risk Prediction <i>Supplemental material is available for this article.</i> ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230431"},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294956/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141074674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
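In the breast cancer risk study above, each cancer-free screening examination is labeled risk-positive or risk-negative according to whether a cancer occurred within 3 years of that examination. The pandas sketch below illustrates one way to apply such a rule; the table layout, column names, and records are hypothetical.

```python
import pandas as pd

# Hypothetical cancer-free screening examinations (one row per examination).
exams = pd.DataFrame({
    "exam_id": [101, 102, 103, 104],
    "patient_id": [1, 1, 2, 3],
    "exam_date": pd.to_datetime(["2013-05-01", "2016-06-01", "2014-03-15", "2015-07-20"]),
})
# Hypothetical subsequent cancer diagnoses in the same screening population.
cancers = pd.DataFrame({
    "patient_id": [1, 3],
    "cancer_date": pd.to_datetime(["2018-01-10", "2016-02-01"]),
})

merged = exams.merge(cancers, on="patient_id", how="left")
merged["within_3y"] = (
    (merged["cancer_date"] > merged["exam_date"])
    & (merged["cancer_date"] <= merged["exam_date"] + pd.DateOffset(years=3))
)
# An examination is risk-positive if any cancer occurs within 3 years of its date.
labels = merged.groupby("exam_id")["within_3y"].any().rename("risk_positive").reset_index()
print(exams.merge(labels, on="exam_id"))
```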
Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.240300
Ali S Tejani, Michail E Klontzas, Anthony A Gatti, John T Mongan, Linda Moy, Seong Ho Park, Charles E Kahn
{"title":"Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update.","authors":"Ali S Tejani, Michail E Klontzas, Anthony A Gatti, John T Mongan, Linda Moy, Seong Ho Park, Charles E Kahn","doi":"10.1148/ryai.240300","DOIUrl":"10.1148/ryai.240300","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240300"},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11304031/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141162489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Two-Stage Training Framework Using Multicontrast MRI Radiomics for IDH Mutation Status Prediction in Glioma.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.230218
Nghi C D Truong, Chandan Ganesh Bangalore Yogananda, Benjamin C Wagner, James M Holcomb, Divya Reddy, Niloufar Saadat, Kimmo J Hatanpaa, Toral R Patel, Baowei Fei, Matthew D Lee, Rajan Jain, Richard J Bruce, Marco C Pinho, Ananth J Madhuranthakam, Joseph A Maldjian
{"title":"Two-Stage Training Framework Using Multicontrast MRI Radiomics for <i>IDH</i> Mutation Status Prediction in Glioma.","authors":"Nghi C D Truong, Chandan Ganesh Bangalore Yogananda, Benjamin C Wagner, James M Holcomb, Divya Reddy, Niloufar Saadat, Kimmo J Hatanpaa, Toral R Patel, Baowei Fei, Matthew D Lee, Rajan Jain, Richard J Bruce, Marco C Pinho, Ananth J Madhuranthakam, Joseph A Maldjian","doi":"10.1148/ryai.230218","DOIUrl":"10.1148/ryai.230218","url":null,"abstract":"<p><p>Purpose To develop a radiomics framework for preoperative MRI-based prediction of isocitrate dehydrogenase (<i>IDH</i>) mutation status, a crucial glioma prognostic indicator. Materials and Methods Radiomics features (shape, first-order statistics, and texture) were extracted from the whole tumor or the combination of nonenhancing, necrosis, and edema regions. Segmentation masks were obtained via the federated tumor segmentation tool or the original data source. Boruta, a wrapper-based feature selection algorithm, identified relevant features. Addressing the imbalance between mutated and wild-type cases, multiple prediction models were trained on balanced data subsets using random forest or XGBoost and assembled to build the final classifier. The framework was evaluated using retrospective MRI scans from three public datasets (The Cancer Imaging Archive [TCIA, 227 patients], the University of California San Francisco Preoperative Diffuse Glioma MRI dataset [UCSF, 495 patients], and the Erasmus Glioma Database [EGD, 456 patients]) and internal datasets collected from the University of Texas Southwestern Medical Center (UTSW, 356 patients), New York University (NYU, 136 patients), and University of Wisconsin-Madison (UWM, 174 patients). TCIA and UTSW served as separate training sets, while the remaining data constituted the test set (1617 or 1488 testing cases, respectively). Results The best performing models trained on the TCIA dataset achieved area under the receiver operating characteristic curve (AUC) values of 0.89 for UTSW, 0.86 for NYU, 0.93 for UWM, 0.94 for UCSF, and 0.88 for EGD test sets. The best performing models trained on the UTSW dataset achieved slightly higher AUCs: 0.92 for TCIA, 0.88 for NYU, 0.96 for UWM, 0.93 for UCSF, and 0.90 for EGD. Conclusion This MRI radiomics-based framework shows promise for accurate preoperative prediction of <i>IDH</i> mutation status in patients with glioma. <b>Keywords:</b> Glioma, Isocitrate Dehydrogenase Mutation, <i>IDH</i> Mutation, Radiomics, MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Moassefi and Erickson in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230218"},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294953/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141074538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
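The two-stage framework in the IDH entry above selects radiomics features with Boruta and then trains multiple random forest or XGBoost models on balanced data subsets that are assembled into a final classifier. The sketch below follows that general pattern on synthetic features, but substitutes a simple random forest importance-based selector for Boruta to keep the example dependency-light; all data and hyperparameters are placeholders, not the published configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                                              # synthetic stand-in for radiomics features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 1.2).astype(int)    # imbalanced labels

# Stage 1: feature selection. The study uses Boruta; a random forest
# importance-based selector stands in here for illustration.
selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0), threshold="median")
X_sel = selector.fit_transform(X, y)

# Stage 2: several XGBoost models trained on class-balanced subsets, probabilities averaged.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
models = []
for seed in range(5):
    sub_rng = np.random.default_rng(seed)
    neg_sub = sub_rng.choice(neg, size=len(pos), replace=False)   # undersample the majority class
    idx = np.concatenate([pos, neg_sub])
    clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss", random_state=seed)
    clf.fit(X_sel[idx], y[idx])
    models.append(clf)

ensemble_prob = np.mean([m.predict_proba(X_sel)[:, 1] for m in models], axis=0)
print("features kept:", X_sel.shape[1], "| ensemble probability shape:", ensemble_prob.shape)
```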
Evaluating Sex-specific Differences in Abdominal Fat Volume and Proton Density Fat Fraction at MRI Using Automated nnU-Net-based Segmentation.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.230471
Arun Somasundaram, Mingming Wu, Anna Reik, Selina Rupp, Jessie Han, Stella Naebauer, Daniela Junker, Lisa Patzelt, Meike Wiechert, Yu Zhao, Daniel Rueckert, Hans Hauner, Christina Holzapfel, Dimitrios C Karampinos
{"title":"Evaluating Sex-specific Differences in Abdominal Fat Volume and Proton Density Fat Fraction at MRI Using Automated nnU-Net-based Segmentation.","authors":"Arun Somasundaram, Mingming Wu, Anna Reik, Selina Rupp, Jessie Han, Stella Naebauer, Daniela Junker, Lisa Patzelt, Meike Wiechert, Yu Zhao, Daniel Rueckert, Hans Hauner, Christina Holzapfel, Dimitrios C Karampinos","doi":"10.1148/ryai.230471","DOIUrl":"10.1148/ryai.230471","url":null,"abstract":"<p><p>Sex-specific abdominal organ volume and proton density fat fraction (PDFF) in people with obesity during a weight loss intervention was assessed with automated multiorgan segmentation of quantitative water-fat MRI. An nnU-Net architecture was employed for automatic segmentation of abdominal organs, including visceral and subcutaneous adipose tissue, liver, and psoas and erector spinae muscle, based on quantitative chemical shift-encoded MRI and using ground truth labels generated from participants of the Lifestyle Intervention (LION) study. Each organ's volume and fat content were examined in 127 participants (73 female and 54 male participants; body mass index, 30-39.9 kg/m<sup>2</sup>) and in 81 (54 female and 32 male participants) of these participants after an 8-week formula-based low-calorie diet. Dice scores ranging from 0.91 to 0.97 were achieved for the automatic segmentation. PDFF was found to be lower in visceral adipose tissue compared with subcutaneous adipose tissue in both male and female participants. Before intervention, female participants exhibited higher PDFF in subcutaneous adipose tissue (90.6% vs 89.7%; <i>P</i> < .001) and lower PDFF in liver (8.6% vs 13.3%; <i>P</i> < .001) and visceral adipose tissue (76.4% vs 81.3%; <i>P</i> < .001) compared with male participants. This relation persisted after intervention. As a response to caloric restriction, male participants lost significantly more visceral adipose tissue volume (1.76 L vs 0.91 L; <i>P</i> < .001) and showed a higher decrease in subcutaneous adipose tissue PDFF (2.7% vs 1.5%; <i>P</i> < .001) than female participants. Automated body composition analysis on quantitative water-fat MRI data provides new insights for understanding sex-specific metabolic response to caloric restriction and weight loss in people with obesity. <b>Keywords:</b> Obesity, Chemical Shift-encoded MRI, Abdominal Fat Volume, Proton Density Fat Fraction, nnU-Net ClinicalTrials.gov registration no. NCT04023942 <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230471"},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141162496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
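The body-composition study above derives each organ's volume and proton density fat fraction from chemical shift-encoded water and fat images within an nnU-Net segmentation mask, with the fat fraction computed from the fat and water signals as fat / (water + fat). The numpy sketch below illustrates those two calculations on synthetic arrays; the voxel size, image values, and mask are placeholders.

```python
import numpy as np

def organ_volume_liters(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    """Volume of a binary segmentation mask, in liters (1 L = 1e6 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_mm3 / 1e6

def median_pdff_percent(water: np.ndarray, fat: np.ndarray, mask: np.ndarray) -> float:
    """Median fat fraction (%) inside a mask, computed as fat / (water + fat)."""
    denom = water + fat
    valid = mask.astype(bool) & (denom > 0)
    pdff = 100.0 * fat[valid] / denom[valid]
    return float(np.median(pdff))

# Synthetic example: a small volume containing a cubic "organ" region.
shape = (40, 40, 40)
rng = np.random.default_rng(0)
water = rng.uniform(50, 100, shape)
fat = rng.uniform(0, 30, shape)
mask = np.zeros(shape, dtype=bool)
mask[10:30, 10:30, 10:30] = True

print("volume (L):", organ_volume_liters(mask, (2.0, 2.0, 3.0)))
print("median PDFF (%):", median_pdff_percent(water, fat, mask))
```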