Deep-Fusion: A lightweight feature fusion model with Cross-Stream Attention and Attention Prediction Head for brain tumor diagnosis

Abdul Haseeb Nizamani, Zhigang Chen, Uzair Aslam Bhatti

Biomedical Signal Processing and Control, Volume 111, Article 108305. Published 2025-07-12. DOI: 10.1016/j.bspc.2025.108305
Abstract
The accurate and early detection of brain tumor types, such as gliomas, meningiomas, and pituitary tumors, is crucial for effective treatment planning and improving patient outcomes. However, advanced Computer-Aided Diagnosis (CAD) systems often face significant limitations in resource-constrained healthcare settings because of their high computational demands. State-of-the-art deep learning models typically require substantial computational power and storage owing to their complex architectures, large parameter counts, and sheer model size, which limits their practical applicability in such environments. To address this, we present Deep-Fusion, a novel lightweight model that maintains high accuracy while significantly reducing computational overhead, making it well suited to resource-constrained environments. The proposed model leverages the strengths of two lightweight pre-trained models, MobileNetV2 and EfficientNetB0, integrated through a Feature Fusion Module (FFM) and further enhanced by a Lightweight Feature Extraction Module (LEM), Cross-Stream Attention (CSA), and an Attention Prediction Head (APH). These components work together to optimize feature representation while preserving computational efficiency. We evaluated Deep-Fusion on two brain MRI datasets, Figshare and Br35H, achieving accuracies of 99.19% and 99.83%, respectively. The model also performed strongly on precision, recall, and F1-score, recording 99.19%, 99.11%, and 99.15% on the Figshare dataset and 99.83% across all metrics on the Br35H dataset. These findings establish Deep-Fusion as a reliable and efficient tool for medical image analysis, particularly in environments with limited computational resources.
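For concreteness, the sketch below shows one way such a dual-stream fusion network could be wired up in Keras. The abstract does not specify the internals of the FFM, LEM, CSA, or APH modules, so the corresponding blocks here are plausible stand-ins (1x1-convolution fusion, a depthwise-separable refinement block, and channel/spatial attention), not the authors' implementation; the function name build_deep_fusion and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of the dual-stream layout described in the abstract.
# Stand-in modules only; per-backbone input preprocessing is elided.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, EfficientNetB0


def build_deep_fusion(input_shape=(224, 224, 3), num_classes=3):
    inputs = layers.Input(shape=input_shape)

    # Two lightweight ImageNet-pretrained backbones form the parallel streams.
    mnet = MobileNetV2(include_top=False, weights="imagenet", input_shape=input_shape)
    enet = EfficientNetB0(include_top=False, weights="imagenet", input_shape=input_shape)
    f1 = mnet(inputs)  # (7, 7, 1280) for a 224x224 input
    f2 = enet(inputs)  # (7, 7, 1280)

    # Cross-Stream Attention (stand-in): each stream is re-weighted by
    # channel statistics pooled from the *other* stream.
    def cross_gate(src, dst):
        w = layers.GlobalAveragePooling2D()(src)
        w = layers.Dense(dst.shape[-1], activation="sigmoid")(w)
        w = layers.Reshape((1, 1, dst.shape[-1]))(w)
        return layers.Multiply()([dst, w])

    f1_att = cross_gate(f2, f1)
    f2_att = cross_gate(f1, f2)

    # Feature Fusion Module (stand-in): concatenate streams, mix with 1x1 conv.
    fused = layers.Concatenate()([f1_att, f2_att])
    fused = layers.Conv2D(512, 1, activation="relu")(fused)

    # Lightweight Feature Extraction Module (stand-in): one depthwise-
    # separable block refines the fused map cheaply.
    x = layers.SeparableConv2D(512, 3, padding="same", activation="relu")(fused)
    x = layers.BatchNormalization()(x)

    # Attention Prediction Head (stand-in): a learned spatial attention map
    # gates the features before global pooling and classification.
    attn = layers.Conv2D(1, 1, activation="sigmoid")(x)
    x = layers.Multiply()([x, attn])
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs, name="deep_fusion_sketch")


# Three classes would correspond to the Figshare glioma/meningioma/pituitary task.
model = build_deep_fusion(num_classes=3)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```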
About the journal
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.

Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of engineering and clinical science. The journal's scope includes relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.