Reproducible meningioma grading across multi-center MRI protocols via hybrid radiomic and deep learning features.

IF 2.6 | CAS Tier 3 (Medicine) | JCR Q2 (Clinical Neurology)
Mohamed J Saadh, Rafid Jihad Albadr, Dharmesh Sur, Anupam Yadav, R Roopashree, Gargi Sangwan, T Krithiga, Zafar Aminov, Waam Mohammed Taher, Mariem Alwan, Mahmood Jasem Jawad, Ali M Ali Al-Nuaimi, Bagher Farhood
DOI: 10.1007/s00234-025-03725-8
Journal: Neuroradiology (Journal Article, published 2025-08-18)
Citations: 0

Abstract

Objective: This study aimed to create a reliable method for preoperative grading of meningiomas by combining radiomic features and deep learning-based features extracted using a 3D autoencoder. The goal was to utilize the strengths of both handcrafted radiomic features and deep learning features to improve accuracy and reproducibility across different MRI protocols.

Materials and methods: The study included 3,523 patients with histologically confirmed meningiomas: 1,900 low-grade (Grade I) and 1,623 high-grade (Grade II and III) cases. Radiomic features were extracted from T1 contrast-enhanced and T2-weighted MRI scans using the Standardized Environment for Radiomics Analysis (SERA). Deep learning features were obtained from the bottleneck layer of a 3D autoencoder integrated with attention mechanisms. Feature selection was performed with Principal Component Analysis (PCA) and Analysis of Variance (ANOVA). Classification used machine learning models such as XGBoost, CatBoost, and stacking ensembles. Reproducibility was evaluated with the Intraclass Correlation Coefficient (ICC), and batch effects were harmonized with the ComBat method. Performance was assessed by accuracy, sensitivity, and the area under the receiver operating characteristic curve (AUC).

Results: For T1-contrast-enhanced images, combining radiomic and deep learning features provided the highest AUC of 95.85% and accuracy of 95.18%, outperforming models using either feature type alone. T2-weighted images showed slightly lower performance, with the best AUC of 94.12% and accuracy of 93.14%. Deep learning features performed better than radiomic features alone, demonstrating their strength in capturing complex spatial patterns. The end-to-end 3D autoencoder with T1-contrast images achieved an AUC of 92.15%, accuracy of 91.14%, and sensitivity of 92.48%, surpassing T2-weighted imaging models. Reproducibility analysis showed high reliability (ICC > 0.75) for 127 out of 215 features, ensuring consistent performance across multi-center datasets.
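The ICC > 0.75 reproducibility criterion used above can be made concrete with a small NumPy sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement). The data here are simulated, with "raters" standing in for repeat feature extractions under different scan conditions; the 0.75 cutoff and the two-condition setup are illustrative assumptions.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1) for an (n_subjects, k_conditions) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-condition means
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(0)
true_val = rng.normal(size=100)                       # latent feature value
stable = true_val[:, None] + 0.1 * rng.normal(size=(100, 2))  # low noise
noisy = true_val[:, None] + 2.0 * rng.normal(size=(100, 2))   # high noise
print(f"stable feature ICC: {icc_2_1(stable):.3f}")
print(f"noisy feature ICC:  {icc_2_1(noisy):.3f}")
```

A feature whose between-condition noise is small relative to between-subject variance clears the 0.75 threshold; a noisy feature does not and would be dropped before modeling.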

Conclusions: The proposed framework effectively integrates radiomic and deep learning features to provide a robust, non-invasive, and reproducible approach for meningioma grading. Future research should validate this framework in real-world clinical settings and explore adding clinical parameters to enhance its prognostic value.

Source journal: Neuroradiology (Medicine, Nuclear Medicine)
CiteScore: 5.30 | Self-citation rate: 3.60% | Annual articles: 214 | Review time: 4-8 weeks
Journal description: Neuroradiology aims to provide state-of-the-art medical and scientific information in the fields of Neuroradiology, Neurosciences, Neurology, Psychiatry, Neurosurgery, and related medical specialities. Neuroradiology, as the official journal of the European Society of Neuroradiology, receives submissions from all parts of the world and publishes peer-reviewed original research, comprehensive reviews, educational papers, opinion papers, and short reports on exceptional clinical observations and new technical developments in the field of Neuroimaging and Neurointervention. The journal has subsections for Diagnostic and Interventional Neuroradiology, Advanced Neuroimaging, Paediatric Neuroradiology, Head-Neck-ENT Radiology, Spine Neuroradiology, and for submissions from Japan. Neuroradiology aims to provide new knowledge about and insights into the function and pathology of the human nervous system that may help to better diagnose and treat nervous system diseases. Neuroradiology is a member of the Committee on Publication Ethics (COPE) and follows the COPE core practices. Neuroradiology prefers articles that are free of bias, self-critical regarding limitations, transparent and clear in describing study participants, methods, and statistics, and short in presenting results. Before peer review, all submissions are automatically checked by iThenticate to assess potential overlap with prior publication.