UIRGBfuse: Revisiting infrared and visible image fusion from the unified fusion of infrared channel with R, G, and B channels

IF 3.1 · JCR Q2 (Instruments & Instrumentation) · CAS Tier 3 (Physics & Astrophysics)
Shi Yi, Si Guo, Mengting Chen, Jiashuai Wang, Yong Jia
{"title":"UIRGBfuse:从红外通道与 R、G 和 B 通道的统一融合重新审视红外和可见光图像融合","authors":"Shi Yi ,&nbsp;Si Guo ,&nbsp;Mengting Chen ,&nbsp;Jiashuai Wang ,&nbsp;Yong Jia","doi":"10.1016/j.infrared.2024.105626","DOIUrl":null,"url":null,"abstract":"<div><div>Infrared and visible image fusion aims to obtain fused images with complementary information from infrared and visible modalities. The visible image captured by the visible spectrum camera consists of R, G, and B channels, exhibiting color information. However, existing fusion frameworks for infrared and visible images typically treat the fusion task as the fusion of infrared images with single-channel grayscale visible images. This approach neglects the fact that different gradient distributions between R, G, and B channels of RGB visible images, which can result in unnatural fusion effects, distortion, poor preservation of details from source images, and degradation of color fidelity. To achieve superior fusion performance in infrared and RGB visible image fusion, a unified fusion framework called UIRGBfuse is proposed in this study. It fused the infrared image with the R, G, and B channels through a unified fusion approach, along with an IR-RGB joint fusion learning strategy that has been designed to ensure natural and outstanding fusion results. The UIRGBfuse consists of separate branches for feature extraction and feature fusion, creating a cohesive architecture for fusing the infrared channel with the R, G, and B channels. Additionally, the training process is guided by R, G, and B fusion losses as part of the devised IR-RGB joint fusion learning strategy. In addition, this study implements the frequency domain compensate feature fusion module to achieve desirable feature fusion performance by the compensate features obtained from the frequency domain. Furthermore, the hybrid CNN-Transformer deep feature refinement module is realized in this study to refine the deep fused features obtained from the fusion branches, thereby further enhancing the fusion performance of UIRGBfuse. Moreover, to address color fidelity distortion observed in infrared and RGB visible image fusion, an adaptive cross-feature fusion reconstructor with the capability of adaptively fusing multi-branch fusion features is constructed in this work. Ablation studies have been conducted on publicly available datasets to validate the effectiveness of the proposed unified fusion architecture, IR-RGB joint fusion learning strategy, feature fusion and refinement modules, and reconstructor. The superiority of the proposed UIRGBfuse over other representative state-of-the-art infrared and visible image fusion methods in terms of natural fusion, retention of source image details, and color fidelity has been demonstrated through comparison and generalization experiments. 
Finally, object detection experiments have shown that the fused images obtained by UIRGBfuse are capable of successfully detecting more targets than other competitors.</div></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"143 ","pages":"Article 105626"},"PeriodicalIF":3.1000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"UIRGBfuse: Revisiting infrared and visible image fusion from the unified fusion of infrared channel with R, G, and B channels\",\"authors\":\"Shi Yi ,&nbsp;Si Guo ,&nbsp;Mengting Chen ,&nbsp;Jiashuai Wang ,&nbsp;Yong Jia\",\"doi\":\"10.1016/j.infrared.2024.105626\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Infrared and visible image fusion aims to obtain fused images with complementary information from infrared and visible modalities. The visible image captured by the visible spectrum camera consists of R, G, and B channels, exhibiting color information. However, existing fusion frameworks for infrared and visible images typically treat the fusion task as the fusion of infrared images with single-channel grayscale visible images. This approach neglects the fact that different gradient distributions between R, G, and B channels of RGB visible images, which can result in unnatural fusion effects, distortion, poor preservation of details from source images, and degradation of color fidelity. To achieve superior fusion performance in infrared and RGB visible image fusion, a unified fusion framework called UIRGBfuse is proposed in this study. It fused the infrared image with the R, G, and B channels through a unified fusion approach, along with an IR-RGB joint fusion learning strategy that has been designed to ensure natural and outstanding fusion results. The UIRGBfuse consists of separate branches for feature extraction and feature fusion, creating a cohesive architecture for fusing the infrared channel with the R, G, and B channels. Additionally, the training process is guided by R, G, and B fusion losses as part of the devised IR-RGB joint fusion learning strategy. In addition, this study implements the frequency domain compensate feature fusion module to achieve desirable feature fusion performance by the compensate features obtained from the frequency domain. Furthermore, the hybrid CNN-Transformer deep feature refinement module is realized in this study to refine the deep fused features obtained from the fusion branches, thereby further enhancing the fusion performance of UIRGBfuse. Moreover, to address color fidelity distortion observed in infrared and RGB visible image fusion, an adaptive cross-feature fusion reconstructor with the capability of adaptively fusing multi-branch fusion features is constructed in this work. Ablation studies have been conducted on publicly available datasets to validate the effectiveness of the proposed unified fusion architecture, IR-RGB joint fusion learning strategy, feature fusion and refinement modules, and reconstructor. The superiority of the proposed UIRGBfuse over other representative state-of-the-art infrared and visible image fusion methods in terms of natural fusion, retention of source image details, and color fidelity has been demonstrated through comparison and generalization experiments. 
Finally, object detection experiments have shown that the fused images obtained by UIRGBfuse are capable of successfully detecting more targets than other competitors.</div></div>\",\"PeriodicalId\":13549,\"journal\":{\"name\":\"Infrared Physics & Technology\",\"volume\":\"143 \",\"pages\":\"Article 105626\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Infrared Physics & Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1350449524005103\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INSTRUMENTS & INSTRUMENTATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infrared Physics & Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350449524005103","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
Citations: 0

Abstract

Infrared and visible image fusion aims to obtain fused images that combine complementary information from the infrared and visible modalities. The visible image captured by a visible-spectrum camera consists of R, G, and B channels and therefore carries color information. However, existing fusion frameworks for infrared and visible images typically treat the task as fusing an infrared image with a single-channel grayscale visible image. This approach neglects the fact that the R, G, and B channels of an RGB visible image have different gradient distributions, which can result in unnatural fusion effects, distortion, poor preservation of details from the source images, and degraded color fidelity. To achieve superior performance in infrared and RGB visible image fusion, this study proposes a unified fusion framework called UIRGBfuse. It fuses the infrared image with the R, G, and B channels through a unified fusion approach, together with an IR-RGB joint fusion learning strategy designed to ensure natural and outstanding fusion results. UIRGBfuse consists of separate branches for feature extraction and feature fusion, forming a cohesive architecture for fusing the infrared channel with the R, G, and B channels. As part of the devised IR-RGB joint fusion learning strategy, the training process is guided by separate R, G, and B fusion losses. In addition, this study implements a frequency-domain compensation feature fusion module, which achieves the desired feature fusion performance using compensation features obtained from the frequency domain. Furthermore, a hybrid CNN-Transformer deep feature refinement module is introduced to refine the deep fused features produced by the fusion branches, further enhancing the fusion performance of UIRGBfuse. Moreover, to address the color fidelity distortion observed in infrared and RGB visible image fusion, an adaptive cross-feature fusion reconstructor capable of adaptively fusing multi-branch fusion features is constructed. Ablation studies on publicly available datasets validate the effectiveness of the proposed unified fusion architecture, the IR-RGB joint fusion learning strategy, the feature fusion and refinement modules, and the reconstructor. Comparison and generalization experiments demonstrate the superiority of UIRGBfuse over other representative state-of-the-art infrared and visible image fusion methods in terms of natural fusion, retention of source image details, and color fidelity. Finally, object detection experiments show that the fused images produced by UIRGBfuse enable more targets to be detected than those of competing methods.
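To make the fusion scheme outlined in the abstract concrete, the snippet below illustrates its two central ideas: fusing the infrared channel separately with each of the R, G, and B channels in its own branch, and guiding training with separate R, G, and B fusion losses. This is a minimal PyTorch-style sketch based only on the abstract; the module names (ChannelFusionBranch, UnifiedIRRGBFusion), the tiny convolutional encoder, the max-based loss targets, and the weights alpha and beta are illustrative assumptions, not the actual UIRGBfuse architecture, which additionally includes the frequency-domain compensation feature fusion module, the hybrid CNN-Transformer deep feature refinement module, and the adaptive cross-feature fusion reconstructor.

```python
# Minimal sketch of the unified IR-R/G/B fusion idea described in the abstract.
# All names, layer sizes, and loss weights below are illustrative assumptions,
# not the architecture or hyper-parameters of the actual UIRGBfuse paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelFusionBranch(nn.Module):
    """Fuses the single-channel infrared image with one visible channel (R, G, or B)."""

    def __init__(self, feat_ch: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(feat_ch, 1, 3, padding=1)

    def forward(self, ir: torch.Tensor, vis_channel: torch.Tensor) -> torch.Tensor:
        x = torch.cat([ir, vis_channel], dim=1)               # (B, 2, H, W)
        return torch.sigmoid(self.decoder(self.encoder(x)))   # fused channel in [0, 1]


class UnifiedIRRGBFusion(nn.Module):
    """One fusion branch per visible channel; the outputs are stacked into an RGB image."""

    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([ChannelFusionBranch() for _ in range(3)])

    def forward(self, ir: torch.Tensor, rgb: torch.Tensor) -> torch.Tensor:
        fused = [branch(ir, rgb[:, c:c + 1]) for c, branch in enumerate(self.branches)]
        return torch.cat(fused, dim=1)                         # (B, 3, H, W) fused RGB image


def gradient(x: torch.Tensor) -> torch.Tensor:
    """Finite-difference gradient magnitude used for the detail-preservation term."""
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))


def joint_rgb_fusion_loss(fused: torch.Tensor, ir: torch.Tensor, rgb: torch.Tensor,
                          alpha: float = 1.0, beta: float = 1.0) -> torch.Tensor:
    """Per-channel intensity + gradient losses, summed over R, G, and B (illustrative)."""
    loss = fused.new_zeros(())
    for c in range(3):
        f, v = fused[:, c:c + 1], rgb[:, c:c + 1]
        intensity = F.l1_loss(f, torch.max(ir, v))                              # keep the brighter source
        detail = F.l1_loss(gradient(f), torch.max(gradient(ir), gradient(v)))   # keep the stronger edges
        loss = loss + alpha * intensity + beta * detail
    return loss


if __name__ == "__main__":
    model = UnifiedIRRGBFusion()
    ir = torch.rand(2, 1, 64, 64)    # single-channel infrared batch
    rgb = torch.rand(2, 3, 64, 64)   # RGB visible batch
    fused = model(ir, rgb)
    print(fused.shape, joint_rgb_fusion_loss(fused, ir, rgb).item())
```

Fusing each color channel with the infrared channel in its own branch, and penalizing each fused channel against its own source channel, is what allows the per-channel gradient distributions mentioned in the abstract to be preserved rather than averaged away in a single grayscale fusion.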
Source journal: Infrared Physics & Technology
CiteScore: 5.70
Self-citation rate: 12.10%
Articles per year: 400
Average review time: 67 days
Journal description: The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. 'Infrared' is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 um (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region. Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine. Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; and atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.