{"title":"RQCMFuse:用于红外和可见光图像融合的红外显著性和可见颜色细节的简化双四元数驱动的协同建模网络","authors":"Shan Gai, Qiyao Liang, Yihao Ni","doi":"10.1016/j.inffus.2025.103754","DOIUrl":null,"url":null,"abstract":"<div><div>Existing infrared and visible image fusion methods typically use a single-channel fusion strategy, limiting their ability to capture the interdependencies between multi-channel data. This leads to the inability to preserve both infrared saliency and visible color-detail simultaneously. Furthermore, most methods focus on spatial feature analysis, neglecting valuable frequency information and failing to fully explore frequency characteristics. To address these issues, we propose a novel fusion framework driven by reduced biquaternion (RQ), named RQCMFuse. This framework not only utilizes RQ to model infrared and visible information in a unified manner but also explores frequency characteristics for superior fusion performance. Specifically, our model is designed based on RQ, maintaining low parameter complexity while improving the coordination between infrared and visible features, thereby naturally preserving infrared saliency and visible color-detail. We also introduce an RQ-frequency collaborative block (RQFCB) to efficiently explore frequency characteristics and facilitate the fusion of RQ and frequency domain features. Additionally, we design the invertible downsampling block (IDB) and adaptive integration block (AIB). The IDB enables efficient multi-scale feature extraction without losing high-frequency information, while the AIB adaptively integrates different layers of RQ features, preserving both structural semantics and texture details. Extensive experiments on multiple datasets demonstrate the efficiency and generalization ability of our proposed method. The results show that RQCMFuse significantly enhances infrared saliency and visible color-detail, providing visually superior fusion outcomes that align with human visual perception. Code is available at <span><span>https://github.com/PPBBJL/RQCMFuse</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103754"},"PeriodicalIF":15.5000,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RQCMFuse: a reduced biquaternion-driven collaborative modeling network of infrared saliency and visible color-detail for infrared and visible image fusion\",\"authors\":\"Shan Gai, Qiyao Liang, Yihao Ni\",\"doi\":\"10.1016/j.inffus.2025.103754\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Existing infrared and visible image fusion methods typically use a single-channel fusion strategy, limiting their ability to capture the interdependencies between multi-channel data. This leads to the inability to preserve both infrared saliency and visible color-detail simultaneously. Furthermore, most methods focus on spatial feature analysis, neglecting valuable frequency information and failing to fully explore frequency characteristics. To address these issues, we propose a novel fusion framework driven by reduced biquaternion (RQ), named RQCMFuse. This framework not only utilizes RQ to model infrared and visible information in a unified manner but also explores frequency characteristics for superior fusion performance. 
Specifically, our model is designed based on RQ, maintaining low parameter complexity while improving the coordination between infrared and visible features, thereby naturally preserving infrared saliency and visible color-detail. We also introduce an RQ-frequency collaborative block (RQFCB) to efficiently explore frequency characteristics and facilitate the fusion of RQ and frequency domain features. Additionally, we design the invertible downsampling block (IDB) and adaptive integration block (AIB). The IDB enables efficient multi-scale feature extraction without losing high-frequency information, while the AIB adaptively integrates different layers of RQ features, preserving both structural semantics and texture details. Extensive experiments on multiple datasets demonstrate the efficiency and generalization ability of our proposed method. The results show that RQCMFuse significantly enhances infrared saliency and visible color-detail, providing visually superior fusion outcomes that align with human visual perception. Code is available at <span><span>https://github.com/PPBBJL/RQCMFuse</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"127 \",\"pages\":\"Article 103754\"},\"PeriodicalIF\":15.5000,\"publicationDate\":\"2025-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525008164\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525008164","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
RQCMFuse: a reduced biquaternion-driven collaborative modeling network of infrared saliency and visible color-detail for infrared and visible image fusion
Abstract:
Existing infrared and visible image fusion methods typically adopt a single-channel fusion strategy, which limits their ability to capture the interdependencies among multi-channel data; as a result, they cannot preserve infrared saliency and visible color-detail simultaneously. Furthermore, most methods focus on spatial feature analysis, neglecting valuable frequency information and leaving frequency characteristics underexplored. To address these issues, we propose a novel fusion framework driven by reduced biquaternions (RQ), named RQCMFuse. This framework not only uses RQ to model infrared and visible information in a unified manner but also exploits frequency characteristics for superior fusion performance. Specifically, our RQ-based model maintains low parameter complexity while improving the coordination between infrared and visible features, thereby naturally preserving infrared saliency and visible color-detail. We also introduce an RQ-frequency collaborative block (RQFCB) that efficiently explores frequency characteristics and fuses RQ-domain and frequency-domain features. In addition, we design an invertible downsampling block (IDB) and an adaptive integration block (AIB): the IDB enables efficient multi-scale feature extraction without losing high-frequency information, while the AIB adaptively integrates RQ features from different layers, preserving both structural semantics and texture details. Extensive experiments on multiple datasets demonstrate the efficiency and generalization ability of the proposed method. The results show that RQCMFuse significantly enhances infrared saliency and visible color-detail, producing visually superior fusion results that align with human visual perception. Code is available at https://github.com/PPBBJL/RQCMFuse.
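To make the RQ representation concrete, below is a minimal sketch of reduced biquaternion (commutative quaternion) arithmetic in Python. The channel assignment (infrared intensity in the real part, visible R, G, B in the three imaginary parts) follows a common convention in quaternion-valued fusion networks and is an assumption on our part, as are the class and function names; the paper's actual layer definitions are in the linked repository.

```python
class ReducedBiquaternion:
    """Reduced biquaternion q = a + b*i + c*j + d*k, with
    i^2 = k^2 = -1, j^2 = 1, ij = ji = k, jk = kj = i, ik = ki = -j.
    Unlike ordinary quaternions, this algebra is commutative."""

    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, o):
        # Commutative RQ product: every output component mixes all four
        # input components, so one multiplication couples IR and RGB data.
        return ReducedBiquaternion(
            self.a * o.a - self.b * o.b + self.c * o.c - self.d * o.d,
            self.a * o.b + self.b * o.a + self.c * o.d + self.d * o.c,
            self.a * o.c + self.c * o.a - self.b * o.d - self.d * o.b,
            self.a * o.d + self.d * o.a + self.b * o.c + self.c * o.b,
        )

    def __repr__(self):
        return f"{self.a:+.3f} {self.b:+.3f}i {self.c:+.3f}j {self.d:+.3f}k"


def encode_pixel(ir, r, g, b):
    """Hypothetical channel assignment: infrared intensity -> real part,
    visible R, G, B -> the i, j, k imaginary parts."""
    return ReducedBiquaternion(ir, r, g, b)


# Example: a single RQ weight acting on one encoded pixel.
pixel = encode_pixel(ir=0.9, r=0.2, g=0.4, b=0.6)
weight = ReducedBiquaternion(0.5, -0.1, 0.3, 0.0)
print(weight * pixel)
```

Note that because each output component draws on all four input components, a single RQ multiplication inherently correlates the infrared and color channels; an RQ-parameterized layer also needs only four real parameter maps where an unconstrained 4x4 real channel mixing would need sixteen, which is consistent with the abstract's low-parameter-complexity claim.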
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines that drive its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.