Comparative analysis of transformer-based deep learning models for glioma and meningioma classification
Katerina Nalentzi, Konstantinos Gerogiannis, Haralabos Bougias, Nikolaos Stogiannos, Periklis Papavasileiou
DOI: 10.1016/j.jmir.2025.102008
Journal of Medical Imaging and Radiation Sciences, Vol. 56, No. 5, Article 102008
Published: 2025-06-18 (Journal Article)
JCR: Q3, Radiology, Nuclear Medicine & Medical Imaging (IF 1.3)
https://www.sciencedirect.com/science/article/pii/S1939865425001572
Citations: 0
Abstract
Introduction/Background
This study compares the classification accuracy of novel transformer-based deep learning models (ViT and BEiT) on brain MRIs of gliomas and meningiomas through a feature-driven approach. Meta's Segment Anything Model (SAM) was used for semi-automatic segmentation, thereby yielding a fully neural network-based workflow for this classification task.
Methods
ViT and BEiT models were fine-tuned on a publicly available brain MRI dataset. A total of 1,132 cases (625 gliomas / 507 meningiomas) were used for training, and 520 cases (260 gliomas / 260 meningiomas) for testing. The deep radiomic features extracted from ViT and BEiT underwent normalization, dimensionality reduction based on the Pearson correlation coefficient (PCC), and feature selection using analysis of variance (ANOVA). A multi-layer perceptron (MLP) with one hidden layer of 100 units, rectified linear unit (ReLU) activation, and the Adam optimizer was utilized. Hyperparameter tuning was performed via 5-fold cross-validation.
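The pipeline described above (normalization, PCC-based redundancy filtering, ANOVA feature selection, and a single-hidden-layer MLP tuned with 5-fold cross-validation) can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the feature matrix is synthetic stand-in data, and the PCC cutoff and hyperparameter grid are assumptions not stated in the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
# Synthetic stand-in for deep radiomic features extracted by ViT/BEiT
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

def pcc_filter(X, threshold=0.9):
    """Greedily keep features; drop any whose |Pearson r| with an
    already-kept feature exceeds the threshold (cutoff is assumed)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

kept = pcc_filter(X)
X_reduced = X[:, kept]

# Normalization -> ANOVA selection (7 features, as in the Results)
# -> MLP with one hidden layer of 100 ReLU units and the Adam solver
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("anova", SelectKBest(f_classif, k=7)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                          solver="adam", max_iter=500, random_state=0)),
])

# Hyperparameter tuning via 5-fold cross-validation (grid is illustrative)
search = GridSearchCV(pipe, {"mlp__alpha": [1e-4, 1e-3]},
                      cv=5, scoring="roc_auc")
search.fit(X_reduced, y)
print(search.best_score_)
```

With random labels the cross-validated AUC hovers near 0.5; on the real feature set the same structure would be expected to reproduce the reported selection of 7 informative features.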
Results
The ViT model achieved its best validation performance using 7 features, with an AUC of 0.985 and an accuracy of 0.952. On the independent testing dataset, the model exhibited an AUC of 0.962 and an accuracy of 0.904. The BEiT model yielded an AUC of 0.939 and an accuracy of 0.871 on the testing dataset.
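The reported AUC and accuracy correspond to standard threshold-free and thresholded metrics on the held-out test set. A minimal sketch with placeholder labels and scores (the real 520-case model outputs are not reproduced here):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Placeholder labels/scores standing in for the independent test set;
# 1 = glioma, 0 = meningioma (coding assumed for illustration)
y_true = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.2])  # predicted P(glioma)

auc = roc_auc_score(y_true, y_score)                     # threshold-free
acc = accuracy_score(y_true, (y_score >= 0.5).astype(int))  # at 0.5 cutoff
print(auc, acc)  # 8/9 and 5/6 for this toy data
```

Note that AUC ranks all score pairs and so can stay high even when the 0.5-threshold accuracy drops, which is why the abstract reports both.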
Discussion
This study demonstrates the effectiveness of transformer-based models, especially ViT, for glioma and meningioma classification, achieving high AUC scores and accuracy. However, the study is limited by the use of a single dataset, which may affect generalizability. Future work should focus on expanding datasets and further optimizing models to improve performance and applicability across different institutions.
Conclusion
This study introduces a feature-driven methodology for glioma and meningioma classification, demonstrating the accuracy and robustness achievable with transformer-based models.
Journal Introduction:
Journal of Medical Imaging and Radiation Sciences is the official peer-reviewed journal of the Canadian Association of Medical Radiation Technologists. The journal is published four times a year and is circulated to approximately 11,000 medical radiation technologists, libraries, and radiology departments throughout Canada, the United States, and overseas. The Journal publishes articles on recent research, new technology and techniques, professional practices, and technologists' viewpoints, as well as relevant book reviews.