TCIGFusion: A two-stage correlated feature interactive guided network for infrared and visible image fusion

Impact Factor: 3.7 | CAS Tier 2 (Engineering & Technology) | JCR Q2 (Optics)
Jiawei Liu, Guiling Sun, Bowen Zheng, Liang Dong
{"title":"TCIGFusion: A two-stage correlated feature interactive guided network for infrared and visible image fusion","authors":"Jiawei Liu,&nbsp;Guiling Sun,&nbsp;Bowen Zheng,&nbsp;Liang Dong","doi":"10.1016/j.optlaseng.2025.109265","DOIUrl":null,"url":null,"abstract":"<div><div>Infrared and visible image fusion is aimed at generating images with prominent targets and texture details, providing support for downstream applications such as object detection. However, most existing deep learning-based fusion methods involve single-stage training and manually designed fusion rules, which cannot effectively extract and fuse features. Therefore, in this paper, we propose a two-stage correlated feature interactive guided network termed TCIGFusion. In the first stage, a Unet-like dual-branch Transformer module and dynamic large kernel convolution block (DLKB) are used to extract global features from the two source images, while the convolution blocks extract local features from the source images. In the second phase, we designed a cross attention guide module (CAGM) to interactively fuse the heterogeneously related features of the two modalities, avoiding the complexity associated with manually designing fusion rules. Furthermore, to optimize the efficacy of the fusion network, we employ a combination of image reconstruction, decomposition, and gradient loss functions for unsupervised training of the model. The superiority of our TCIGFusion is evidenced by extensive experimentation conducted on multiple public datasets. These experiments demonstrate that our method outperforms other state-of-the-art deep learning approaches, as evaluated through both subjective and objective metrics.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"195 ","pages":"Article 109265"},"PeriodicalIF":3.7000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Lasers in Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0143816625004506","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPTICS","Score":null,"Total":0}
引用次数: 0

Abstract

Infrared and visible image fusion aims to generate images with prominent targets and rich texture details, providing support for downstream applications such as object detection. However, most existing deep learning-based fusion methods rely on single-stage training and manually designed fusion rules, which cannot effectively extract and fuse features. Therefore, in this paper, we propose a two-stage correlated feature interactive guided network termed TCIGFusion. In the first stage, a UNet-like dual-branch Transformer module and a dynamic large kernel convolution block (DLKB) extract global features from the two source images, while convolution blocks extract local features. In the second stage, we design a cross attention guide module (CAGM) to interactively fuse the heterogeneous yet correlated features of the two modalities, avoiding the complexity of manually designed fusion rules. Furthermore, to optimize the fusion network, we employ a combination of image reconstruction, decomposition, and gradient loss functions for unsupervised training of the model. Extensive experiments on multiple public datasets demonstrate that TCIGFusion outperforms other state-of-the-art deep learning approaches under both subjective and objective evaluation.
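The abstract does not define the DLKB's internals, so the following is a minimal PyTorch sketch under assumptions: a depthwise large-kernel convolution supplies the wide receptive field for "global" context, and an input-dependent channel gate makes it "dynamic". The class name, kernel size, and gating scheme are hypothetical illustrations, not the paper's definition.

```python
import torch
import torch.nn as nn

class DynamicLargeKernelBlock(nn.Module):
    # Hypothetical sketch of a "dynamic large kernel convolution block":
    # a depthwise large-kernel conv captures long-range context, and a
    # channel gate computed from the input modulates it dynamically.
    def __init__(self, dim: int, kernel_size: int = 11):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size,
                            padding=kernel_size // 2, groups=dim)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),      # (B, C, 1, 1) channel statistics
            nn.Conv2d(dim, dim, 1),
            nn.Sigmoid(),
        )
        self.pw = nn.Conv2d(dim, dim, 1)  # pointwise channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gate the wide-receptive-field response, mix channels, and keep
        # a residual path so the block stays easy to optimize.
        return x + self.pw(self.dw(x) * self.gate(x))
```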
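Likewise, the CAGM is described only as interactive cross-modal fusion that replaces hand-designed rules. A common realization of this idea is cross-attention in which one modality's features query the other's, so each branch is re-weighted by what is salient in its counterpart; the sketch below assumes that formulation, and the class name and merging step are illustrative.

```python
import torch
import torch.nn as nn

class CrossAttentionGuide(nn.Module):
    # Hypothetical sketch of cross-attention guidance between modalities:
    # queries come from one branch, keys/values from the other.
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (B, N, C) token sequences from the two branches.
        q = self.norm_q(feat_a)
        kv = self.norm_kv(feat_b)
        guided, _ = self.attn(q, kv, kv)  # branch A attends to branch B
        return feat_a + guided            # residual keeps A's own content

# Applied symmetrically, then merged, e.g.:
#   ir_guided  = cagm_ir(ir_tokens, vis_tokens)
#   vis_guided = cagm_vis(vis_tokens, ir_tokens)
#   fused      = fuse_conv(torch.cat([ir_guided, vis_guided], dim=-1))
```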
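The unsupervised objective combines reconstruction, decomposition, and gradient losses; the reconstruction and decomposition terms depend on the paper's two-stage design and are not specified in the abstract, so the sketch below shows only a commonly used intensity-plus-Sobel-gradient pair (familiar from other infrared-visible fusion work) to illustrate how a gradient term preserves texture. Function names and weights are hypothetical.

```python
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def sobel_grad(img: torch.Tensor) -> torch.Tensor:
    # Sobel response magnitude of a single-channel (B, 1, H, W) image.
    gx = F.conv2d(img, _SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(img, _SOBEL_Y.to(img.device), padding=1)
    return gx.abs() + gy.abs()

def fusion_loss(fused, ir, vis, w_int=1.0, w_grad=1.0):
    # Intensity term: follow the stronger source response per pixel,
    # which keeps hot infrared targets prominent.
    loss_int = F.l1_loss(fused, torch.maximum(ir, vis))
    # Gradient term: keep the sharper texture of the two sources.
    loss_grad = F.l1_loss(sobel_grad(fused),
                          torch.maximum(sobel_grad(ir), sobel_grad(vis)))
    return w_int * loss_int + w_grad * loss_grad
```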
Source Journal

Optics and Lasers in Engineering (Engineering & Technology, Optics)

CiteScore: 8.90
Self-citation rate: 8.70%
Articles per year: 384
Review time: 42 days
Journal Introduction

Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions, and new theoretical concepts for experimental methods.

Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed in an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal includes the following:

- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques