{"title":"QuatFuse:基于四元数的正交表示学习,用于多模态图像融合","authors":"Weida Wang, Zhuowei Wang, Xingming Liao, Xuanxuan Ma, Siyue Xie, Genping Zhao, Lianglun Cheng","doi":"10.1016/j.infrared.2025.106202","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-modality image fusion (MMIF) is a technique that integrates complementary information from different imaging modalities into a single image, aiming to generate a more comprehensive and information-rich integrated representation. The existing methods focus on using more complex network structures to improve the fusion performance of the model but ignore the correlation between different modal images. To solve this problem, we propose QuatFuse, a Quaternion-Based Orthogonal Representation Learning fusion method. This approach utilizes the mathematical properties of quaternions to model inter-modal relationships. Specifically, we introduce orthogonal geometric constraints and discrete cosine transformations to process redundant information and enhance features across various frequencies, effectively improving QuatFuse’s retention of key features. Fusing high-frequency and low-frequency information from multi-modal images after feature extraction is implemented in the quaternion domain, effectively mapping this processing procedure from the traditional real domain to a higher-dimensional representation space. To validate the robustness of QuatFuse, experiments on Infrared-Visible image fusion (IVF) and Medical image fusion (MIF) are conducted across 6 datasets (comprising 5 public datasets and 1 private dataset), with its performance being measured by eight distinct metrics. 
Our model achieved state-of-the-art (SOTA) performance on most evaluation metrics, demonstrating its superior fusion capabilities.</div></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"152 ","pages":"Article 106202"},"PeriodicalIF":3.4000,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"QuatFuse: Quaternion-based orthogonal representation learning for multi-modal image fusion\",\"authors\":\"Weida Wang, Zhuowei Wang, Xingming Liao, Xuanxuan Ma, Siyue Xie, Genping Zhao, Lianglun Cheng\",\"doi\":\"10.1016/j.infrared.2025.106202\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-modality image fusion (MMIF) is a technique that integrates complementary information from different imaging modalities into a single image, aiming to generate a more comprehensive and information-rich integrated representation. The existing methods focus on using more complex network structures to improve the fusion performance of the model but ignore the correlation between different modal images. To solve this problem, we propose QuatFuse, a Quaternion-Based Orthogonal Representation Learning fusion method. This approach utilizes the mathematical properties of quaternions to model inter-modal relationships. Specifically, we introduce orthogonal geometric constraints and discrete cosine transformations to process redundant information and enhance features across various frequencies, effectively improving QuatFuse’s retention of key features. Fusing high-frequency and low-frequency information from multi-modal images after feature extraction is implemented in the quaternion domain, effectively mapping this processing procedure from the traditional real domain to a higher-dimensional representation space. 
To validate the robustness of QuatFuse, experiments on Infrared-Visible image fusion (IVF) and Medical image fusion (MIF) are conducted across 6 datasets (comprising 5 public datasets and 1 private dataset), with its performance being measured by eight distinct metrics. Our model achieved state-of-the-art (SOTA) performance on most evaluation metrics, demonstrating its superior fusion capabilities.</div></div>\",\"PeriodicalId\":13549,\"journal\":{\"name\":\"Infrared Physics & Technology\",\"volume\":\"152 \",\"pages\":\"Article 106202\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-10-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Infrared Physics & Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1350449525004955\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INSTRUMENTS & INSTRUMENTATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infrared Physics & Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350449525004955","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
QuatFuse: Quaternion-based orthogonal representation learning for multi-modal image fusion
Multi-modality image fusion (MMIF) is a technique that integrates complementary information from different imaging modalities into a single image, aiming to generate a more comprehensive and information-rich integrated representation. Existing methods focus on using more complex network structures to improve fusion performance but ignore the correlations between images of different modalities. To address this problem, we propose QuatFuse, a quaternion-based orthogonal representation learning fusion method. This approach exploits the mathematical properties of quaternions to model inter-modal relationships. Specifically, we introduce orthogonal geometric constraints and discrete cosine transforms to suppress redundant information and enhance features across frequencies, effectively improving QuatFuse's retention of key features. After feature extraction, high-frequency and low-frequency information from the multi-modal images is fused in the quaternion domain, mapping this processing from the traditional real domain into a higher-dimensional representation space. To validate the robustness of QuatFuse, experiments on infrared-visible image fusion (IVF) and medical image fusion (MIF) are conducted across 6 datasets (5 public and 1 private), with performance measured by eight distinct metrics. Our model achieves state-of-the-art (SOTA) performance on most evaluation metrics, demonstrating its superior fusion capabilities.
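The two core operations the abstract describes, DCT-based separation of low- and high-frequency content and fusion via quaternion algebra, can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the `cutoff` parameter, the fusion rules (averaging low frequencies, max-absolute selection for high frequencies), and the use of an element-wise Hamilton product are all assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn


def dct_band_split(img, cutoff=8):
    """Split an image into low/high-frequency parts via the 2-D DCT.

    Coefficients in the top-left `cutoff` x `cutoff` corner are treated
    as low frequency; everything else as high frequency.
    """
    coeffs = dctn(img, norm="ortho")
    low = np.zeros_like(coeffs)
    low[:cutoff, :cutoff] = coeffs[:cutoff, :cutoff]
    return idctn(low, norm="ortho"), idctn(coeffs - low, norm="ortho")


def hamilton_product(q1, q2):
    """Element-wise Hamilton product of quaternion-valued arrays,
    each given as (w, x, y, z) component tuples."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)


def fuse(ir, vis, cutoff=8):
    """Toy frequency-domain fusion of an infrared and a visible image:
    average the low-frequency bands, keep the stronger (max-absolute)
    response in the high-frequency bands. Fusion rules are assumed."""
    ir_lo, ir_hi = dct_band_split(ir, cutoff)
    vis_lo, vis_hi = dct_band_split(vis, cutoff)
    lo = 0.5 * (ir_lo + vis_lo)
    hi = np.where(np.abs(ir_hi) >= np.abs(vis_hi), ir_hi, vis_hi)
    return lo + hi
```

In the actual method, quaternion components would carry learned feature maps rather than raw pixels, with the Hamilton product (or quaternion convolutions) coupling the modalities and orthogonality constraints keeping their representations decorrelated; the sketch only shows the underlying algebra.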
Journal Introduction:
The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region.
Its core topics can be summarized as the generation, propagation, and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine.
Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.