Liye Mei, Xinglong Hu, Zhaoyi Ye, Zhiwei Ye, Chuan Xu, Sheng Liu, Cheng Lei
Title: Visual fidelity and full-scale interaction driven network for infrared and visible image fusion
Journal: Pattern Recognition, Volume 165, Article 111612 (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 7.5)
DOI: 10.1016/j.patcog.2025.111612
Publication date: 2025-03-26
URL: https://www.sciencedirect.com/science/article/pii/S0031320325002729
Source code: https://github.com/XingLongH/VFFusion
Citations: 0
Abstract
The objective of infrared and visible image fusion is to combine the complementary strengths of the source images into a single image that serves both human visual perception and machine detection. Existing fusion networks still fall short in effectively characterizing and retaining source image features. To counter these deficiencies, we propose a visual fidelity and full-scale interaction driven network for infrared and visible image fusion, named VFFusion. First, a multi-scale feature encoder based on BiFormer is constructed, and a feature cascade interaction module is designed to perform full-scale interaction on features distributed across different scales. In addition, a visual fidelity branch is built to process multi-scale features in parallel with the fusion branch. Specifically, the visual fidelity branch uses blurred images for self-supervised training in a constructed auxiliary task, thereby obtaining an effective representation of the source image information. By exploiting the complementary representational features of infrared and visible images as supervisory information, it constrains the fusion branch to retain the source image features in the fused image. Notably, the visual fidelity branch employs a multi-scale joint reconstruction loss, utilizing the rich supervisory signals provided by multi-scale original images to enhance the feature representation of targets at different scales, resulting in clear fusion of the targets. Extensive qualitative and quantitative comparative experiments conducted on four datasets against nine advanced methods demonstrate the superiority of our approach. The source code is available at https://github.com/XingLongH/VFFusion.
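The multi-scale joint reconstruction loss described in the abstract can be sketched in rough form as follows. This is a minimal illustration only: the `downsample` helper (simple average pooling to build the scale pyramid), the L1 distance per scale, and the specific `factors` and `weights` values are all hypothetical stand-ins, not details taken from the paper.

```python
import numpy as np

def downsample(img, factor):
    # Average-pool a 2-D image by an integer factor (a stand-in for the
    # multi-scale pyramid of original images used as supervision).
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_reconstruction_loss(reconstructed, original,
                                   factors=(1, 2, 4),
                                   weights=(1.0, 0.5, 0.25)):
    # Hypothetical multi-scale joint reconstruction loss: sum the weighted
    # L1 distances between the reconstruction and the original image at
    # several scales, so targets of different sizes all receive supervision.
    total = 0.0
    for f, w in zip(factors, weights):
        r = downsample(reconstructed, f)
        o = downsample(original, f)
        total += w * np.abs(r - o).mean()
    return total
```

A perfect reconstruction yields zero loss at every scale, while errors on large structures are penalized at the coarse scales as well as the fine one, which is the intuition behind supervising with multi-scale originals.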
Journal introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.