{"title":"SDATFuse: Sparse dual aggregation transformer-based network for infrared and visible image fusion","authors":"Jinshi Guo, Yang Li, Yutong Chen, Yu Ling","doi":"10.1016/j.dsp.2025.105200","DOIUrl":null,"url":null,"abstract":"<div><div>Infrared and visible image fusion aims to integrate complementary thermal radiation and detailed information to enhance scene understanding. Transformer architectures have shown promising performance in this field, but their feed-forward networks struggle to model multi-scale features, and self-attention often aggregates features using the similarities of all tokens in the queries and keys, which leads to irrelevant tokens introducing noise. To address these issues, this paper proposes a Sparse Dual Aggregation Transformer-based network for Infrared and Visible Image Fusion (SDATFuse). First, a hybrid multi-scale feed-forward network (HMSF) is introduced to effectively model multi-scale information and extract cross-modal features. Next, a sparse spatial self-attention mechanism is developed, using dynamic top-k selection operator to filter key self-attention values. By applying sparse spatial self-attention and channel self-attention in consecutive Transformer blocks, SDATFuse constructs a dual aggregation structure that efficiently integrates inter-block features. Additionally, a Dynamic Interaction Module (DIM) aggregates intra-block features across different self-attention dimensions. Finally, in the fusion stage, a Dual Selective Attention Module (DSAM) dynamically selects weights for global and local features from both modalities, utilizing spatial and channel self-attention maps. The proposed SDATFuse demonstrates superior performance on multiple infrared and visible image datasets. Experiments show that SDATFuse's fused results outperform state-of-the-art models in both qualitative and quantitative evaluations, effectively reducing noise and preserving detailed information.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"162 ","pages":"Article 105200"},"PeriodicalIF":2.9000,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1051200425002222","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Infrared and visible image fusion aims to integrate complementary thermal radiation and detailed information to enhance scene understanding. Transformer architectures have shown promising performance in this field, but their feed-forward networks struggle to model multi-scale features, and self-attention typically aggregates features using the similarities between all query and key tokens, so irrelevant tokens introduce noise. To address these issues, this paper proposes a Sparse Dual Aggregation Transformer-based network for Infrared and Visible Image Fusion (SDATFuse). First, a hybrid multi-scale feed-forward network (HMSF) is introduced to effectively model multi-scale information and extract cross-modal features. Next, a sparse spatial self-attention mechanism is developed, which uses a dynamic top-k selection operator to filter the key self-attention values. By applying sparse spatial self-attention and channel self-attention in consecutive Transformer blocks, SDATFuse constructs a dual aggregation structure that efficiently integrates inter-block features. Additionally, a Dynamic Interaction Module (DIM) aggregates intra-block features across different self-attention dimensions. Finally, in the fusion stage, a Dual Selective Attention Module (DSAM) dynamically selects weights for global and local features from both modalities, utilizing spatial and channel self-attention maps. The proposed SDATFuse demonstrates superior performance on multiple infrared and visible image datasets: its fused results outperform those of state-of-the-art models in both qualitative and quantitative evaluations, effectively reducing noise and preserving detailed information.
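To make the sparsification idea concrete, the sketch below shows a minimal top-k sparse self-attention step in PyTorch: each query keeps only its k most similar keys, and the remaining attention scores are masked out before the softmax so that irrelevant tokens contribute no weight. This is an illustration of the general technique named in the abstract, not the authors' implementation; the function name, tensor shapes, and the fixed value of k are assumptions (in SDATFuse the top-k selection is described as dynamic).

```python
# Minimal sketch of top-k sparse self-attention (illustrative only; names,
# shapes, and the fixed k below are assumptions, not the paper's code).
import torch


def sparse_topk_attention(q, k, v, top_k):
    """Scaled dot-product attention where each query attends only to its
    top_k most similar keys; all other scores are masked before the softmax.

    q, k, v: (batch, heads, tokens, dim) tensors.
    """
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale      # (B, H, N, N) similarity scores

    # Keep only the top_k scores per query row; mask the rest to -inf.
    topk_vals, _ = scores.topk(top_k, dim=-1)
    threshold = topk_vals[..., -1:]                 # k-th largest score per row
    scores = scores.masked_fill(scores < threshold, float("-inf"))

    attn = scores.softmax(dim=-1)                   # masked entries get zero weight
    return attn @ v


if __name__ == "__main__":
    B, H, N, D = 1, 4, 64, 32
    q, k, v = (torch.randn(B, H, N, D) for _ in range(3))
    out = sparse_topk_attention(q, k, v, top_k=16)  # each query attends to 16 of 64 tokens
    print(out.shape)                                # torch.Size([1, 4, 64, 32])
```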
Journal introduction:
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• cheminformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy