Deep-Fusion: A lightweight feature fusion model with Cross-Stream Attention and Attention Prediction Head for brain tumor diagnosis

Impact Factor: 4.9 · CAS Tier 2 (Medicine) · JCR Q1 (Engineering, Biomedical)
Abdul Haseeb Nizamani, Zhigang Chen, Uzair Aslam Bhatti
DOI: 10.1016/j.bspc.2025.108305
Journal: Biomedical Signal Processing and Control, Volume 111, Article 108305
Published: 2025-07-12 (Journal Article)
Citations: 0

Abstract

The accurate and early detection of brain tumor types such as gliomas, meningiomas, and pituitary tumors is crucial for effective treatment planning and improved patient outcomes. However, advanced Computer-Aided Diagnosis (CAD) systems often face significant limitations in resource-constrained healthcare settings due to their high computational demands. State-of-the-art deep learning models typically require substantial computational power and storage because of their complex architectures, large parameter counts, and model sizes, which limits their practical applicability in such environments. To address this, we present Deep-Fusion, a novel lightweight model that maintains high accuracy while significantly reducing computational overhead, making it well suited to resource-constrained environments. The proposed model leverages the strengths of two lightweight pre-trained models, MobileNetV2 and EfficientNetB0, integrated through a Feature Fusion Module (FFM) and further enhanced by a Lightweight Feature Extraction Module (LEM), Cross-Stream Attention (CSA), and an Attention Prediction Head (APH). These components work together to optimize feature representation while preserving computational efficiency. We evaluated Deep-Fusion on two brain MRI datasets, Figshare and Br35H, achieving accuracies of 99.19% and 99.83%, respectively. The model also performed strongly on precision, recall, and F1-score, recording 99.19%, 99.11%, and 99.15% on the Figshare dataset and 99.83% across all three metrics on the Br35H dataset. These findings establish Deep-Fusion as a reliable and efficient tool for medical image analysis, particularly in environments with limited computational resources.
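The data flow described above (two backbone streams combined by the FFM, with CSA re-weighting each stream using information from the other) can be sketched roughly as follows. The abstract does not give the module equations, so the global pooling, sigmoid gating, and channel concatenation below are assumptions modeled on common channel-attention designs (e.g., squeeze-and-excitation), not the authors' implementation; NumPy stands in for a deep learning framework, and `cross_stream_attention` / `fuse_streams` are hypothetical names.

```python
import numpy as np

def cross_stream_attention(f_a, f_b):
    """Hypothetical CSA: each stream is re-weighted channel-wise by a
    gate computed from the *other* stream's pooled features.
    f_a, f_b: (C, H, W) feature maps, assumed already projected to the
    same channel count C before fusion."""
    g_a = f_a.mean(axis=(1, 2))            # global average pool -> (C,)
    g_b = f_b.mean(axis=(1, 2))
    gate_a = 1.0 / (1.0 + np.exp(-g_b))    # sigmoid gate from stream B
    gate_b = 1.0 / (1.0 + np.exp(-g_a))    # sigmoid gate from stream A
    return f_a * gate_a[:, None, None], f_b * gate_b[:, None, None]

def fuse_streams(f_a, f_b):
    """Hypothetical FFM: concatenate the attended streams on the
    channel axis, doubling the channel count."""
    a, b = cross_stream_attention(f_a, f_b)
    return np.concatenate([a, b], axis=0)

rng = np.random.default_rng(0)
f_mob = rng.standard_normal((32, 7, 7))   # stand-in for MobileNetV2 features
f_eff = rng.standard_normal((32, 7, 7))   # stand-in for EfficientNetB0 features
fused = fuse_streams(f_mob, f_eff)
print(fused.shape)                        # (64, 7, 7)
```

In an actual implementation the two feature maps would come from pretrained MobileNetV2 and EfficientNetB0 backbones, and the gates would be learned (e.g., via small fully connected layers) rather than computed directly from pooled activations, but the fusion topology is the same.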
Source Journal
Biomedical Signal Processing and Control (Engineering, Technology – Biomedical Engineering)
CiteScore: 9.80
Self-citation rate: 13.70%
Articles per year: 822
Review time: 4 months
Journal description: Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring, and management. Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of engineering and clinical science. The scope of the journal includes relevant review papers, technical notes, short communications, and letters. Tutorial papers and special issues will also be published.