Shengchun Wang, Haowen Li, Lianye Liu, Ronghui Cai, Zhonghai Yin, Huijie Zhu
{"title":"tsfi融合:一种基于变压器和空频相互作用的双支路解耦红外和可见光图像融合网络","authors":"Shengchun Wang , Haowen Li , Lianye Liu , Ronghui Cai , Zhonghai Yin , Huijie Zhu","doi":"10.1016/j.optlaseng.2025.109287","DOIUrl":null,"url":null,"abstract":"<div><div>Infrared and visible image fusion (IVIF) aims to generate high-quality images by combining detailed textures from visible images with the target-highlight capabilities of infrared images. However, many existing methods struggle to capture both shared and unique features of each modality. They often focus only on spatial domain fusion, such as pixel averaging, while overlooking valuable frequency domain information. This makes it hard to retain fine details. To overcome these limitations, we propose TSFI-Fusion, a dual-branch network that combines Transformer-based global understanding with spatial-frequency detail enhancement. The two branches include a Transformer-based semantic construction branch for capturing global features and a detail enhancement branch utilizing an invertible neural network (INN) and a frequency domain compensation module (FDCM) to integrate spatial and frequency information. We also design a dual-domain interaction module (DDIM) to improve feature correlation across domains and a collaborative information integration module (CIIM) to effectively merge features from both branches. Additionally, we introduce a focal frequency loss to guide the model in learning important frequency information. Experimental results demonstrate that TSFI-Fusion outperforms existing methods across multiple datasets and metrics on the IVIF task. In downstream applications such as object detection, it effectively enhances performance. Furthermore, extended experiments on the MIF task reveal the robust generalization ability of the proposed mechanism across diverse fusion scenarios. Our code will be available at <span><span>https://github.com/lihaowen0109/TSFI-Fusion</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"195 ","pages":"Article 109287"},"PeriodicalIF":3.7000,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TSFI-fusion: A dual-branch decoupled infrared and visible image fusion network based on transformer and spatial-frequency interaction\",\"authors\":\"Shengchun Wang , Haowen Li , Lianye Liu , Ronghui Cai , Zhonghai Yin , Huijie Zhu\",\"doi\":\"10.1016/j.optlaseng.2025.109287\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Infrared and visible image fusion (IVIF) aims to generate high-quality images by combining detailed textures from visible images with the target-highlight capabilities of infrared images. However, many existing methods struggle to capture both shared and unique features of each modality. They often focus only on spatial domain fusion, such as pixel averaging, while overlooking valuable frequency domain information. This makes it hard to retain fine details. To overcome these limitations, we propose TSFI-Fusion, a dual-branch network that combines Transformer-based global understanding with spatial-frequency detail enhancement. The two branches include a Transformer-based semantic construction branch for capturing global features and a detail enhancement branch utilizing an invertible neural network (INN) and a frequency domain compensation module (FDCM) to integrate spatial and frequency information. 
We also design a dual-domain interaction module (DDIM) to improve feature correlation across domains and a collaborative information integration module (CIIM) to effectively merge features from both branches. Additionally, we introduce a focal frequency loss to guide the model in learning important frequency information. Experimental results demonstrate that TSFI-Fusion outperforms existing methods across multiple datasets and metrics on the IVIF task. In downstream applications such as object detection, it effectively enhances performance. Furthermore, extended experiments on the MIF task reveal the robust generalization ability of the proposed mechanism across diverse fusion scenarios. Our code will be available at <span><span>https://github.com/lihaowen0109/TSFI-Fusion</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49719,\"journal\":{\"name\":\"Optics and Lasers in Engineering\",\"volume\":\"195 \",\"pages\":\"Article 109287\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optics and Lasers in Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0143816625004725\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Lasers in Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0143816625004725","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPTICS","Score":null,"Total":0}
TSFI-fusion: A dual-branch decoupled infrared and visible image fusion network based on transformer and spatial-frequency interaction
Infrared and visible image fusion (IVIF) aims to generate high-quality images by combining the detailed textures of visible images with the target-highlighting capability of infrared images. However, many existing methods struggle to capture both the shared and the modality-specific features of each input. They often focus only on spatial-domain fusion, such as pixel averaging, while overlooking valuable frequency-domain information, which makes it hard to retain fine details. To overcome these limitations, we propose TSFI-Fusion, a dual-branch network that combines Transformer-based global understanding with spatial-frequency detail enhancement. The two branches comprise a Transformer-based semantic construction branch that captures global features, and a detail enhancement branch that integrates spatial and frequency information via an invertible neural network (INN) and a frequency domain compensation module (FDCM). We also design a dual-domain interaction module (DDIM) to improve feature correlation across domains and a collaborative information integration module (CIIM) to effectively merge features from both branches. Additionally, we introduce a focal frequency loss to guide the model toward learning important frequency information. Experimental results demonstrate that TSFI-Fusion outperforms existing methods across multiple datasets and metrics on the IVIF task, and it effectively enhances performance in downstream applications such as object detection. Extended experiments on the medical image fusion (MIF) task further reveal the robust generalization of the proposed mechanism across diverse fusion scenarios. Our code will be available at https://github.com/lihaowen0109/TSFI-Fusion.
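The focal frequency loss mentioned in the abstract follows the general idea of weighting per-frequency reconstruction errors by how poorly each frequency is currently reproduced, so that hard-to-fit frequency components dominate the objective. Below is a minimal PyTorch sketch of such a loss, assuming the standard formulation (spectral distance under an orthonormal 2-D FFT with a detached focal weight map); it is an illustration of the technique, not the authors' released implementation, and the alpha exponent and normalization choices are assumptions.

# Minimal sketch of a focal frequency loss, assuming the standard
# formulation; not the TSFI-Fusion authors' implementation.
import torch
import torch.nn as nn


class FocalFrequencyLoss(nn.Module):
    """Penalizes spectral reconstruction error, weighting each frequency
    bin by its current error magnitude so hard frequencies dominate."""

    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha  # focusing exponent (assumed hyperparameter)

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Orthonormal 2-D FFT over spatial dims -> complex tensors (B, C, H, W).
        pred_f = torch.fft.fft2(pred, norm="ortho")
        target_f = torch.fft.fft2(target, norm="ortho")
        # Squared spectral distance per frequency bin.
        dist = (pred_f - target_f).abs() ** 2
        # Focal weight: larger current error -> larger weight; detached so
        # gradients do not flow through the weight map itself.
        weight = dist.sqrt() ** self.alpha
        weight = (weight / weight.max().clamp(min=1e-8)).detach()
        return (weight * dist).mean()


# Toy usage with random single-channel "images".
if __name__ == "__main__":
    loss_fn = FocalFrequencyLoss(alpha=1.0)
    fused = torch.rand(2, 1, 64, 64, requires_grad=True)
    reference = torch.rand(2, 1, 64, 64)
    loss = loss_fn(fused, reference)
    loss.backward()
    print(float(loss))

In a network like the one described above, such a term would supplement the usual spatial-domain losses, steering training toward frequency components that pixel-wise objectives alone tend to under-weight.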
Journal Introduction:
Optics and Lasers in Engineering aims to provide an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, and on the development and enhancement of solutions and new theoretical concepts for experimental methods.
Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques