{"title":"基于生成对抗网络的弱光条件下红外与可见光图像无监督融合方法","authors":"Shuai Yang, Yuan Gao, Shiwei Ma","doi":"10.1016/j.image.2025.117324","DOIUrl":null,"url":null,"abstract":"<div><div>The aim of fusing infrared and visible images is to achieve high-quality images by enhancing textural details and obtaining complementary benefits. However, the existing methods for fusing infrared and visible images are suitable only normal lighting scenes. The details of the visible image under low-light conditions are not discernible. Achieving complementarity between the image contours and textural details is challenging between the infrared image and the visible image. With the intention of addressing the challenge of poor quality of infrared and visible light fusion images under low light conditions, a novel unsupervised fusion method for infrared and visible image under low_light condition (referred to as UFIVL) is presented in this paper. Specifically, the proposed method effectively enhances the low-light regions of visible light images while reducing noise. To incorporate style features of the image into the reconstruction of content features, a sparse-connection dense structure is designed. An adaptive contrast-limited histogram equalization loss function is introduced to improve contrast and brightness in the fused image. The joint gradient loss is proposed to extract clearer texture features under low-light conditions. This end-to-end method generates fused images with enhanced contrast and rich details. Furthermore, considering the issues in existing public datasets, a dataset for individuals and objects in low-light conditions (LLHO <span><span>https://github.com/alex551781/LLHO</span><svg><path></path></svg></span>) is proposed. On the ground of the experimental results, we can conclude that the proposed method generates fusion images with higher subjective and objective quantification scores on both the LLVIP public dataset and the LLHO self-built dataset. 
Additionally, we apply the fusion images generated by UFIVL method to the advanced computer vision task of target detection, resulting in a significant improvement in detection performance.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117324"},"PeriodicalIF":3.4000,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An unsupervised fusion method for infrared and visible image under low-light condition based on Generative Adversarial Networks\",\"authors\":\"Shuai Yang, Yuan Gao, Shiwei Ma\",\"doi\":\"10.1016/j.image.2025.117324\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The aim of fusing infrared and visible images is to achieve high-quality images by enhancing textural details and obtaining complementary benefits. However, the existing methods for fusing infrared and visible images are suitable only normal lighting scenes. The details of the visible image under low-light conditions are not discernible. Achieving complementarity between the image contours and textural details is challenging between the infrared image and the visible image. With the intention of addressing the challenge of poor quality of infrared and visible light fusion images under low light conditions, a novel unsupervised fusion method for infrared and visible image under low_light condition (referred to as UFIVL) is presented in this paper. Specifically, the proposed method effectively enhances the low-light regions of visible light images while reducing noise. To incorporate style features of the image into the reconstruction of content features, a sparse-connection dense structure is designed. An adaptive contrast-limited histogram equalization loss function is introduced to improve contrast and brightness in the fused image. 
The joint gradient loss is proposed to extract clearer texture features under low-light conditions. This end-to-end method generates fused images with enhanced contrast and rich details. Furthermore, considering the issues in existing public datasets, a dataset for individuals and objects in low-light conditions (LLHO <span><span>https://github.com/alex551781/LLHO</span><svg><path></path></svg></span>) is proposed. On the ground of the experimental results, we can conclude that the proposed method generates fusion images with higher subjective and objective quantification scores on both the LLVIP public dataset and the LLHO self-built dataset. Additionally, we apply the fusion images generated by UFIVL method to the advanced computer vision task of target detection, resulting in a significant improvement in detection performance.</div></div>\",\"PeriodicalId\":49521,\"journal\":{\"name\":\"Signal Processing-Image Communication\",\"volume\":\"138 \",\"pages\":\"Article 117324\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-04-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal Processing-Image Communication\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0923596525000712\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing-Image 
Communication","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0923596525000712","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
An unsupervised fusion method for infrared and visible image under low-light condition based on Generative Adversarial Networks
The aim of fusing infrared and visible images is to obtain high-quality images by enhancing textural details and exploiting the complementary strengths of the two modalities. However, existing fusion methods are suitable only for normally lit scenes: under low-light conditions, the details of the visible image are not discernible, and achieving complementarity between image contours and textural details across the infrared and visible images is challenging. To address the poor quality of fused infrared and visible images under low-light conditions, this paper presents a novel unsupervised fusion method for infrared and visible images under low-light conditions (referred to as UFIVL). Specifically, the proposed method effectively enhances the low-light regions of visible images while suppressing noise. To incorporate style features of the image into the reconstruction of content features, a sparse-connection dense structure is designed. An adaptive contrast-limited histogram equalization loss function is introduced to improve contrast and brightness in the fused image, and a joint gradient loss is proposed to extract clearer texture features under low-light conditions. This end-to-end method generates fused images with enhanced contrast and rich details. Furthermore, to address shortcomings of existing public datasets, a dataset of individuals and objects in low-light conditions (LLHO, https://github.com/alex551781/LLHO) is introduced. The experimental results show that the proposed method produces fused images with higher subjective and objective quantitative scores on both the LLVIP public dataset and the self-built LLHO dataset. Additionally, applying the fusion images generated by UFIVL to the downstream computer vision task of target detection yields a significant improvement in detection performance.
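The paper does not give the formula for its joint gradient loss, but gradient losses in fusion work commonly compare the fused image's gradient map against the element-wise maximum of the source gradients, so that the sharper edge from either modality is preserved. The sketch below illustrates that idea with simple finite differences; the function names and the max-gradient formulation are illustrative assumptions, not the authors' exact loss.

```python
import numpy as np

def grad_mag(img):
    # Forward-difference gradient magnitude of a 2-D float image.
    # The last row/column is padded by repetition so the output
    # keeps the input shape.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def joint_gradient_loss(fused, ir, vis):
    # L1 distance between the fused gradient map and the element-wise
    # maximum of the infrared and visible gradient maps: the fused
    # image is rewarded for reproducing the strongest edge available
    # in either source (hypothetical formulation).
    target = np.maximum(grad_mag(ir), grad_mag(vis))
    return float(np.mean(np.abs(grad_mag(fused) - target)))
```

In a training setup this scalar would be one weighted term of the generator's total loss, alongside the adversarial and histogram-equalization terms the abstract mentions.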
About the journal:
Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are the following:
To present a forum for the advancement of theory and practice of image communication.
To stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems.
To contribute to a rapid information exchange between the industrial and academic environments.
The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The Journal is self-supporting from subscription income and contains a minimum amount of advertisements. Advertisements are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world.
Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments.
Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, architectures for image/video processing and communication.