An unsupervised fusion method for infrared and visible image under low-light condition based on Generative Adversarial Networks

IF 3.4 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Shuai Yang, Yuan Gao, Shiwei Ma
DOI: 10.1016/j.image.2025.117324
Journal: Signal Processing: Image Communication, Volume 138, Article 117324
Publication date: 2025-04-24
Full text: https://www.sciencedirect.com/science/article/pii/S0923596525000712
Citations: 0

Abstract

The aim of fusing infrared and visible images is to achieve high-quality images by enhancing textural details and obtaining complementary benefits. However, existing methods for fusing infrared and visible images are suitable only for normal-lighting scenes: the details of a visible image captured under low-light conditions are not discernible, and achieving complementarity between the contours of the infrared image and the textural details of the visible image is challenging. To address the poor quality of infrared and visible light fusion images under low-light conditions, a novel unsupervised fusion method for infrared and visible images under low-light conditions (referred to as UFIVL) is presented in this paper. Specifically, the proposed method effectively enhances the low-light regions of visible images while reducing noise. To incorporate style features of the image into the reconstruction of content features, a sparse-connection dense structure is designed. An adaptive contrast-limited histogram equalization loss function is introduced to improve contrast and brightness in the fused image, and a joint gradient loss is proposed to extract clearer texture features under low-light conditions. This end-to-end method generates fused images with enhanced contrast and rich details. Furthermore, considering the issues in existing public datasets, a dataset of individuals and objects in low-light conditions (LLHO, https://github.com/alex551781/LLHO) is proposed. Based on the experimental results, we conclude that the proposed method generates fusion images with higher subjective and objective quantification scores on both the LLVIP public dataset and the self-built LLHO dataset. Additionally, we apply the fusion images generated by the UFIVL method to the downstream computer vision task of target detection, where they yield a significant improvement in detection performance.
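The abstract describes the joint gradient loss only at a high level. As an illustrative sketch (not the authors' exact formulation), a gradient-consistency loss of this kind can be built by steering the fused image's gradients toward the element-wise strongest gradients of the two source images; the function names and the forward-difference gradient operator below are assumptions for illustration:

```python
import numpy as np

def image_gradient(img):
    """Forward-difference gradients in x and y (zero at the far borders)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def joint_gradient_loss(fused, ir, vis):
    """Penalize the fused image for deviating from the element-wise
    strongest gradient of the infrared and visible inputs (L1 penalty)."""
    fgx, fgy = image_gradient(fused)
    igx, igy = image_gradient(ir)
    vgx, vgy = image_gradient(vis)
    # Target gradient: whichever source has the larger magnitude per pixel.
    tgx = np.where(np.abs(igx) > np.abs(vgx), igx, vgx)
    tgy = np.where(np.abs(igy) > np.abs(vgy), igy, vgy)
    return float(np.mean(np.abs(fgx - tgx) + np.abs(fgy - tgy)))
```

In a training setting the same computation would be expressed with a differentiable framework so the loss can backpropagate into the generator; the NumPy version here only conveys the structure of the objective.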
Source journal
Signal Processing-Image Communication
Category: Engineering (Electrical & Electronic)
CiteScore: 8.40
Self-citation rate: 2.90%
Articles per year: 138
Average review time: 5.2 months
Aims and scope: Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are: to present a forum for the advancement of theory and practice of image communication; to stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems; and to contribute to a rapid information exchange between the industrial and academic environments. The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The journal is self-supporting from subscription income and contains a minimum amount of advertisements, which are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world. Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments. Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, and architectures for image/video processing and communication.