Integrating MobileNetV3 and SqueezeNet for Multi-class Brain Tumor Classification
Sahithi Kantu, Hema Sai Kaja, Vaishnavi Kukkala, Salah A Aly, Khaled Sayed
Journal of Imaging Informatics in Medicine, published 2025-07-03. DOI: 10.1007/s10278-025-01589-1
Abstract
Brain tumors pose a critical health threat requiring timely and accurate classification for effective treatment. Traditional MRI analysis is labor-intensive and prone to variability, necessitating reliable automated solutions. This study explores lightweight deep learning models for multi-class brain tumor classification across four categories: glioma, meningioma, pituitary tumors, and no tumor. We investigate the performance of MobileNetV3 and SqueezeNet individually, as well as a feature-fusion hybrid model that combines their embedding layers. We utilized a publicly available MRI dataset containing 7023 images with a consistent internal split (65% training, 17% validation, 18% test) to ensure reliable evaluation. MobileNetV3 offers deep semantic understanding through its expressive features, while SqueezeNet provides minimal computational overhead; their feature-level integration balances diagnostic accuracy against deployment efficiency. Experiments conducted with consistent hyperparameters and preprocessing showed that MobileNetV3 achieved the highest test accuracy (99.31%) while maintaining a low parameter count (3.47M), making it suitable for real-world deployment. Grad-CAM visualizations were employed for model explainability, highlighting tumor-relevant regions and helping visualize the specific areas contributing to predictions. Our proposed models outperform several baseline architectures such as VGG16 and InceptionV3, achieving high accuracy with significantly fewer parameters. These results demonstrate that well-optimized lightweight networks can deliver accurate and interpretable brain tumor classification.
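To illustrate the feature-fusion idea described above, the following is a minimal PyTorch sketch of a hybrid model that concatenates pooled embeddings from MobileNetV3 and SqueezeNet before a small classification head. The abstract does not specify the MobileNetV3 variant, the fusion-layer sizes, the classifier head, or the use of pretrained weights, so those choices (mobilenet_v3_small, squeezenet1_1, a 256-unit hidden layer, ImageNet initialization) are assumptions for illustration only, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models


class FusionClassifier(nn.Module):
    """Hypothetical feature-level fusion of MobileNetV3 and SqueezeNet embeddings.

    Sketch only: backbone variants, fusion dimensions, and the classifier head
    are assumptions, since the abstract does not report these details.
    """

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # ImageNet-pretrained backbones; their original classifiers are discarded.
        mobilenet = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
        squeezenet = models.squeezenet1_1(weights="IMAGENET1K_V1")

        self.mobilenet_features = mobilenet.features    # -> (B, 576, h1, w1)
        self.squeezenet_features = squeezenet.features  # -> (B, 512, h2, w2)
        self.pool = nn.AdaptiveAvgPool2d(1)

        # Concatenated embedding: 576 + 512 = 1088 features.
        self.classifier = nn.Sequential(
            nn.Linear(576 + 512, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        m = self.pool(self.mobilenet_features(x)).flatten(1)
        s = self.pool(self.squeezenet_features(x)).flatten(1)
        return self.classifier(torch.cat([m, s], dim=1))


if __name__ == "__main__":
    # Four classes: glioma, meningioma, pituitary tumor, no tumor.
    model = FusionClassifier(num_classes=4)
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 4])
```

Global average pooling before concatenation keeps the fused embedding compact regardless of the two backbones' differing spatial resolutions, which is one straightforward way to realize the accuracy/efficiency trade-off the abstract describes.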