Reproducible meningioma grading across multi-center MRI protocols via hybrid radiomic and deep learning features
Mohamed J Saadh, Rafid Jihad Albadr, Dharmesh Sur, Anupam Yadav, R Roopashree, Gargi Sangwan, T Krithiga, Zafar Aminov, Waam Mohammed Taher, Mariem Alwan, Mahmood Jasem Jawad, Ali M Ali Al-Nuaimi, Bagher Farhood
Neuroradiology, published online 2025-08-18. DOI: 10.1007/s00234-025-03725-8 (https://doi.org/10.1007/s00234-025-03725-8)
Abstract
Objective: This study aimed to develop a reliable method for preoperative grading of meningiomas by combining handcrafted radiomic features with deep learning features extracted by a 3D autoencoder, leveraging the strengths of both feature types to improve accuracy and reproducibility across different MRI protocols.
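To make the feature-extraction idea concrete, the sketch below shows a minimal 3D convolutional autoencoder whose flattened bottleneck activations could serve as the deep learning features. The architecture, layer sizes, and the omission of the attention blocks mentioned in the methods are all assumptions, since the abstract does not specify them; this is an illustrative sketch, not the authors' model.

```python
# Minimal sketch (assumed architecture): a 3D convolutional autoencoder whose
# bottleneck vector would serve as the "deep learning features" fed downstream.
import torch
import torch.nn as nn

class Autoencoder3D(nn.Module):
    def __init__(self, bottleneck_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),   # 64^3 -> 32^3
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),  # 32^3 -> 16^3
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),  # 16^3 -> 8^3
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8 * 8, bottleneck_dim),              # bottleneck layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 64 * 8 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8, 8)),
            nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2),    # 8^3 -> 16^3
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2),    # 16^3 -> 32^3
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, kernel_size=2, stride=2),     # 32^3 -> 64^3
        )

    def forward(self, x):
        z = self.encoder(x)          # bottleneck features
        return self.decoder(z), z

# Usage on a dummy 64^3 MRI patch: reconstruct and extract bottleneck features.
model = Autoencoder3D()
volume = torch.randn(2, 1, 64, 64, 64)   # (batch, channel, D, H, W)
recon, features = model(volume)
print(features.shape)                     # torch.Size([2, 128])
```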
Materials and methods: The study included 3,523 patients with histologically confirmed meningiomas, comprising 1,900 low-grade (Grade I) and 1,623 high-grade (Grades II and III) cases. Radiomic features were extracted from T1-contrast-enhanced and T2-weighted MRI scans using the Standardized Environment for Radiomics Analysis (SERA). Deep learning features were obtained from the bottleneck layer of a 3D autoencoder integrated with attention mechanisms. Feature selection was performed using Principal Component Analysis (PCA) and Analysis of Variance (ANOVA). Classification was performed with machine learning models, including XGBoost, CatBoost, and stacking ensembles. Reproducibility was evaluated using the Intraclass Correlation Coefficient (ICC), and batch effects were harmonized with the ComBat method. Performance was assessed by accuracy, sensitivity, and the area under the receiver operating characteristic curve (AUC).
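As a hedged illustration of the selection and classification stage, the sketch below chains an ANOVA F-test filter, PCA, and a stacking ensemble of XGBoost and CatBoost. The logistic-regression meta-learner, the feature counts, and the synthetic data are assumptions not stated in the abstract, and the ICC > 0.75 filtering and ComBat harmonization are taken as already applied upstream.

```python
# Hedged sketch of the feature-selection and classification stage.
# X is a placeholder for the concatenated radiomic + bottleneck deep features;
# y encodes 0 = low grade (Grade I), 1 = high grade (Grades II/III).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 215))          # placeholder for the real feature matrix
y = rng.integers(0, 2, size=200)

pipeline = make_pipeline(
    SelectKBest(f_classif, k=100),       # ANOVA F-test feature selection
    PCA(n_components=30),                # PCA dimensionality reduction
    StackingClassifier(
        estimators=[
            ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
            ("cat", CatBoostClassifier(iterations=200, verbose=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),  # assumed meta-learner
    ),
)

auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"5-fold CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```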
Results: For T1-contrast-enhanced images, combining radiomic and deep learning features yielded the highest AUC (95.85%) and accuracy (95.18%), outperforming models using either feature type alone. T2-weighted images showed slightly lower performance, with a best AUC of 94.12% and accuracy of 93.14%. Deep learning features outperformed radiomic features alone, demonstrating their strength in capturing complex spatial patterns. The end-to-end 3D autoencoder with T1-contrast images achieved an AUC of 92.15%, accuracy of 91.14%, and sensitivity of 92.48%, surpassing the T2-weighted imaging models. Reproducibility analysis showed high reliability (ICC > 0.75) for 127 of 215 features, supporting consistent performance across multi-center datasets.
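For reference, the reported performance metrics are standard and can be computed as in the brief sketch below; the labels and scores are placeholders, not the study's data.

```python
# Sketch of how accuracy, sensitivity, and AUC are typically computed.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])                    # 1 = high grade
y_prob = np.array([0.2, 0.9, 0.7, 0.1, 0.8, 0.4, 0.6, 0.3])    # predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)                            # assumed 0.5 threshold

print("Accuracy   :", accuracy_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))   # recall of the high-grade class
print("AUC        :", roc_auc_score(y_true, y_prob))
```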
Conclusions: The proposed framework effectively integrates radiomic and deep learning features to provide a robust, non-invasive, and reproducible approach for meningioma grading. Future research should validate this framework in real-world clinical settings and explore adding clinical parameters to enhance its prognostic value.
Journal description:
Neuroradiology aims to provide state-of-the-art medical and scientific information in the fields of Neuroradiology, Neurosciences, Neurology, Psychiatry, Neurosurgery, and related medical specialties. As the official journal of the European Society of Neuroradiology, Neuroradiology receives submissions from all parts of the world and publishes peer-reviewed original research, comprehensive reviews, educational papers, opinion papers, and short reports on exceptional clinical observations and new technical developments in the field of Neuroimaging and Neurointervention. The journal has subsections for Diagnostic and Interventional Neuroradiology, Advanced Neuroimaging, Paediatric Neuroradiology, Head-Neck-ENT Radiology, Spine Neuroradiology, and for submissions from Japan. Neuroradiology aims to provide new knowledge about, and insights into, the function and pathology of the human nervous system that may help to better diagnose and treat nervous system diseases. Neuroradiology is a member of the Committee on Publication Ethics (COPE) and follows the COPE core practices. Neuroradiology prefers articles that are free of bias, self-critical regarding limitations, transparent and clear in describing study participants, methods, and statistics, and short in presenting results. Before peer review, all submissions are automatically checked by iThenticate to assess potential overlap with prior publications.