RQCMFuse: a reduced biquaternion-driven collaborative modeling network of infrared saliency and visible color-detail for infrared and visible image fusion

Impact Factor: 15.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Shan Gai, Qiyao Liang, Yihao Ni
{"title":"RQCMFuse: a reduced biquaternion-driven collaborative modeling network of infrared saliency and visible color-detail for infrared and visible image fusion","authors":"Shan Gai,&nbsp;Qiyao Liang,&nbsp;Yihao Ni","doi":"10.1016/j.inffus.2025.103754","DOIUrl":null,"url":null,"abstract":"<div><div>Existing infrared and visible image fusion methods typically use a single-channel fusion strategy, limiting their ability to capture the interdependencies between multi-channel data. This leads to the inability to preserve both infrared saliency and visible color-detail simultaneously. Furthermore, most methods focus on spatial feature analysis, neglecting valuable frequency information and failing to fully explore frequency characteristics. To address these issues, we propose a novel fusion framework driven by reduced biquaternion (RQ), named RQCMFuse. This framework not only utilizes RQ to model infrared and visible information in a unified manner but also explores frequency characteristics for superior fusion performance. Specifically, our model is designed based on RQ, maintaining low parameter complexity while improving the coordination between infrared and visible features, thereby naturally preserving infrared saliency and visible color-detail. We also introduce an RQ-frequency collaborative block (RQFCB) to efficiently explore frequency characteristics and facilitate the fusion of RQ and frequency domain features. Additionally, we design the invertible downsampling block (IDB) and adaptive integration block (AIB). The IDB enables efficient multi-scale feature extraction without losing high-frequency information, while the AIB adaptively integrates different layers of RQ features, preserving both structural semantics and texture details. Extensive experiments on multiple datasets demonstrate the efficiency and generalization ability of our proposed method. The results show that RQCMFuse significantly enhances infrared saliency and visible color-detail, providing visually superior fusion outcomes that align with human visual perception. Code is available at <span><span>https://github.com/PPBBJL/RQCMFuse</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103754"},"PeriodicalIF":15.5000,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525008164","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Existing infrared and visible image fusion methods typically adopt a single-channel fusion strategy, which limits their ability to capture the interdependencies among multi-channel data; as a result, they cannot preserve infrared saliency and visible color-detail simultaneously. Furthermore, most methods focus on spatial feature analysis, neglecting valuable frequency information and failing to fully exploit frequency characteristics. To address these issues, we propose a novel fusion framework driven by the reduced biquaternion (RQ), named RQCMFuse. This framework not only uses RQ to model infrared and visible information in a unified manner but also exploits frequency characteristics for superior fusion performance. Specifically, our model is built on RQ, maintaining low parameter complexity while improving the coordination between infrared and visible features, thereby naturally preserving infrared saliency and visible color-detail. We also introduce an RQ-frequency collaborative block (RQFCB) to efficiently explore frequency characteristics and facilitate the fusion of RQ and frequency-domain features. Additionally, we design an invertible downsampling block (IDB) and an adaptive integration block (AIB). The IDB enables efficient multi-scale feature extraction without losing high-frequency information, while the AIB adaptively integrates RQ features from different layers, preserving both structural semantics and texture details. Extensive experiments on multiple datasets demonstrate the efficiency and generalization ability of the proposed method. The results show that RQCMFuse significantly enhances infrared saliency and visible color-detail, producing visually superior fusion results that align with human visual perception. Code is available at https://github.com/PPBBJL/RQCMFuse.
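
The abstract does not specify how RQ-valued features are formed or combined; purely as an illustration, the sketch below implements the standard reduced-biquaternion (commutative quaternion) product on 4-component tensors, one plausible way to couple an infrared channel with the three visible color channels. The encoding and function names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): the reduced-biquaternion
# (commutative quaternion) product, where a value is encoded as
# q = q0 + q1*i + q2*j + q3*k with i^2 = k^2 = -1, j^2 = 1, ij = ji = k.
# One plausible encoding: q0 = infrared intensity, (q1, q2, q3) = R, G, B.
import torch

def rq_product(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Element-wise reduced-biquaternion product of two (..., 4) tensors."""
    p0, p1, p2, p3 = p.unbind(-1)
    q0, q1, q2, q3 = q.unbind(-1)
    r0 = p0 * q0 - p1 * q1 + p2 * q2 - p3 * q3   # real part
    r1 = p0 * q1 + p1 * q0 + p2 * q3 + p3 * q2   # i part
    r2 = p0 * q2 + p2 * q0 - p1 * q3 - p3 * q1   # j part
    r3 = p0 * q3 + p3 * q0 + p1 * q2 + p2 * q1   # k part
    return torch.stack((r0, r1, r2, r3), dim=-1)

# Unlike the full quaternion Hamilton product, this product is commutative:
p, q = torch.randn(2, 4), torch.randn(2, 4)
assert torch.allclose(rq_product(p, q), rq_product(q, p))
```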
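Similarly, the IDB is described only as downsampling "without losing high-frequency information". One standard way to make downsampling exactly invertible is space-to-depth (pixel unshuffle), sketched below under that assumption; the paper's actual IDB may be implemented differently.

```python
# Minimal sketch of lossless, invertible 2x downsampling via
# space-to-depth (pixel unshuffle). Illustrative only: the paper's IDB
# is not described in the abstract and may differ.
import torch
import torch.nn as nn

down = nn.PixelUnshuffle(2)   # (B, C, H, W) -> (B, 4C, H/2, W/2)
up = nn.PixelShuffle(2)       # exact inverse rearrangement

x = torch.randn(1, 4, 64, 64)   # e.g. a 4-component RQ feature map
y = down(x)                     # spatial size halved, no information discarded
assert torch.equal(up(y), x)    # perfectly invertible
```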
Source Journal

Information Fusion
Category: Engineering & Technology, Computer Science: Theory & Methods
CiteScore: 33.20
Self-citation rate: 4.30%
Publication volume: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses as well as those demonstrating their application to real-world problems are welcome.