SDATFuse: Sparse dual aggregation transformer-based network for infrared and visible image fusion

IF 2.9 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Jinshi Guo, Yang Li, Yutong Chen, Yu Ling
{"title":"SDATFuse: Sparse dual aggregation transformer-based network for infrared and visible image fusion","authors":"Jinshi Guo,&nbsp;Yang Li,&nbsp;Yutong Chen,&nbsp;Yu Ling","doi":"10.1016/j.dsp.2025.105200","DOIUrl":null,"url":null,"abstract":"<div><div>Infrared and visible image fusion aims to integrate complementary thermal radiation and detailed information to enhance scene understanding. Transformer architectures have shown promising performance in this field, but their feed-forward networks struggle to model multi-scale features, and self-attention often aggregates features using the similarities of all tokens in the queries and keys, which leads to irrelevant tokens introducing noise. To address these issues, this paper proposes a Sparse Dual Aggregation Transformer-based network for Infrared and Visible Image Fusion (SDATFuse). First, a hybrid multi-scale feed-forward network (HMSF) is introduced to effectively model multi-scale information and extract cross-modal features. Next, a sparse spatial self-attention mechanism is developed, using dynamic top-k selection operator to filter key self-attention values. By applying sparse spatial self-attention and channel self-attention in consecutive Transformer blocks, SDATFuse constructs a dual aggregation structure that efficiently integrates inter-block features. Additionally, a Dynamic Interaction Module (DIM) aggregates intra-block features across different self-attention dimensions. Finally, in the fusion stage, a Dual Selective Attention Module (DSAM) dynamically selects weights for global and local features from both modalities, utilizing spatial and channel self-attention maps. The proposed SDATFuse demonstrates superior performance on multiple infrared and visible image datasets. Experiments show that SDATFuse's fused results outperform state-of-the-art models in both qualitative and quantitative evaluations, effectively reducing noise and preserving detailed information.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"162 ","pages":"Article 105200"},"PeriodicalIF":2.9000,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1051200425002222","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Infrared and visible image fusion aims to integrate complementary thermal radiation and detail information to enhance scene understanding. Transformer architectures have shown promising performance in this field, but their feed-forward networks struggle to model multi-scale features, and self-attention often aggregates features using the similarities of all tokens in the queries and keys, which allows irrelevant tokens to introduce noise. To address these issues, this paper proposes a Sparse Dual Aggregation Transformer-based network for infrared and visible image fusion (SDATFuse). First, a hybrid multi-scale feed-forward network (HMSF) is introduced to effectively model multi-scale information and extract cross-modal features. Next, a sparse spatial self-attention mechanism is developed that uses a dynamic top-k selection operator to filter the key self-attention values. By applying sparse spatial self-attention and channel self-attention in consecutive Transformer blocks, SDATFuse constructs a dual aggregation structure that efficiently integrates inter-block features. Additionally, a Dynamic Interaction Module (DIM) aggregates intra-block features across the two self-attention dimensions. Finally, in the fusion stage, a Dual Selective Attention Module (DSAM) uses the spatial and channel self-attention maps to dynamically select weights for the global and local features of both modalities. The proposed SDATFuse demonstrates superior performance on multiple infrared and visible image datasets: experiments show that its fused results outperform those of state-of-the-art models in both qualitative and quantitative evaluations, effectively reducing noise while preserving detail.
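The most concrete mechanism in the abstract is the dynamic top-k filtering inside the sparse spatial self-attention. Below is a minimal PyTorch sketch of that idea over flattened spatial tokens; the module name, head count, and the fixed `k_ratio` parameter are illustrative assumptions, not the paper's implementation (the paper selects k dynamically, and its channel self-attention counterpart is not shown here).

```python
import torch
import torch.nn as nn

class SparseTopKSpatialAttention(nn.Module):
    """Spatial self-attention that keeps only the top-k most similar tokens
    per query, masking the rest before the softmax so irrelevant tokens
    cannot inject noise into the aggregation (a sketch of the abstract's idea)."""

    def __init__(self, dim: int, num_heads: int = 4, k_ratio: float = 0.5):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.k_ratio = k_ratio  # fraction of tokens retained per query (assumed fixed here)
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) with N = H*W flattened spatial tokens
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each (B, heads, N, d)

        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)

        # Top-k selection: keep the k largest similarities per query row
        # and push the rest to -inf so softmax zeroes them out.
        k_keep = max(1, int(self.k_ratio * N))
        topk_vals, _ = attn.topk(k_keep, dim=-1)
        threshold = topk_vals[..., -1, None]           # k-th largest per query
        attn = attn.masked_fill(attn < threshold, float("-inf"))

        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Quick shape check on random features (hypothetical sizes).
x = torch.randn(2, 64, 32)   # batch of 2, 8x8 spatial tokens, 32 channels
block = SparseTopKSpatialAttention(dim=32, num_heads=4, k_ratio=0.5)
print(block(x).shape)        # torch.Size([2, 64, 32])
```

In the dual aggregation structure described above, a block like this would alternate with a channel self-attention block (attention computed across channels rather than spatial positions), with the DIM aggregating features between the two attention dimensions; that pairing is only described in the abstract and is not sketched here.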
Source Journal

Digital Signal Processing (Engineering & Technology; Engineering: Electrical & Electronic)
CiteScore: 5.30
Self-citation rate: 17.20%
Annual publications: 435
Review time: 66 days
Journal Description

Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal. The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:

• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing including seismic signal processing
• chemioinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy