{"title":"Nighttime visible and infrared image fusion based on adversarial learning","authors":"Qiwen Shi, Zhizhong Xi, Huibin Li","doi":"10.1016/j.infrared.2024.105618","DOIUrl":null,"url":null,"abstract":"<div><div>The task of infrared–visible image fusion (IVIF) aims to integrate multi-modal complementary information and facilitate other downstream tasks, especially under some harsh circumstances. To tackle the challenges of preserving significant information and enhancing visual effects under nighttime conditions, we propose a novel IVIF method based on adversarial learning, namely AdvFusion. It consists of an autoencoder-based generator and a dual discriminator. In particular, the multi-scale features of source images are firstly extracted by ResNet, and then aggregated based on the attention mechanisms and nest connection strategy to generate the fused images. Meanwhile, a global and local dual discriminator structure is designed to minimize the distance between the illumination distributions of the reference images and fused images, which achieves contrast enhancement within fused images and helps to uncover hidden cues in darkness. Moreover, a color loss is utilized to maintain color balance of each fused image, while the widely used perceptual loss and gradient loss are employed to maintain content consistency between the source and fused images. Extensive experiments conducted on five datasets demonstrate that our AdvFusion can achieve promising results compared with the state-of-the-art IVIF methods in terms of both visual effects and quantitative metrics. Furthermore, AdvFusion can also boost the performance of semantic segmentation on MSRS dataset and object detection on M3FD dataset.</div></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"144 ","pages":"Article 105618"},"PeriodicalIF":3.1000,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infrared Physics & Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350449524005024","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
Citations: 0
Abstract
The task of infrared–visible image fusion (IVIF) aims to integrate multi-modal complementary information and facilitate other downstream tasks, especially under harsh conditions. To tackle the challenges of preserving significant information and enhancing visual effects under nighttime conditions, we propose a novel IVIF method based on adversarial learning, namely AdvFusion. It consists of an autoencoder-based generator and a dual discriminator. In particular, the multi-scale features of the source images are first extracted by a ResNet, and then aggregated using attention mechanisms and a nest-connection strategy to generate the fused images. Meanwhile, a global and local dual-discriminator structure is designed to minimize the distance between the illumination distributions of the reference and fused images, which enhances contrast within the fused images and helps to uncover hidden cues in darkness. Moreover, a color loss is utilized to maintain the color balance of each fused image, while the widely used perceptual loss and gradient loss are employed to maintain content consistency between the source and fused images. Extensive experiments conducted on five datasets demonstrate that AdvFusion achieves promising results compared with state-of-the-art IVIF methods in terms of both visual effects and quantitative metrics. Furthermore, AdvFusion also boosts the performance of semantic segmentation on the MSRS dataset and object detection on the M3FD dataset.
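The abstract's description of the training objective (adversarial terms from a global and a local discriminator, plus color, perceptual, and gradient losses) can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: the exact loss formulations, the weights, and the assumption that the local discriminator handles patch extraction internally are all hypothetical.

```python
# Minimal sketch (assumptions throughout) of a generator objective combining
# adversarial, color, perceptual, and gradient terms, as described in the abstract.
import torch
import torch.nn.functional as F

def image_gradients(img: torch.Tensor):
    """Finite-difference gradients along width and height for (B, C, H, W) tensors."""
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy

def gradient_loss(fused, visible, infrared):
    """Match fused gradients to the element-wise max of the source gradients,
    a common content-consistency choice in IVIF work (assumed here)."""
    fx, fy = image_gradients(fused)
    vx, vy = image_gradients(visible)
    ix, iy = image_gradients(infrared)
    return (F.l1_loss(fx.abs(), torch.maximum(vx.abs(), ix.abs())) +
            F.l1_loss(fy.abs(), torch.maximum(vy.abs(), iy.abs())))

def color_loss(fused, visible):
    """Encourage color balance by matching per-channel means (one simple proxy)."""
    return F.l1_loss(fused.mean(dim=(2, 3)), visible.mean(dim=(2, 3)))

def generator_loss(fused, visible, infrared, source_feats, fused_feats,
                   d_global, d_local, weights=(1.0, 10.0, 1.0, 10.0)):
    """source_feats / fused_feats: deep features (e.g. from a frozen VGG) of the
    source and fused images; d_global / d_local: the two discriminators."""
    w_adv, w_col, w_perc, w_grad = weights  # hypothetical weighting
    # Adversarial terms: make both the global (whole-image) and local (patch)
    # discriminators score the fused result as reference-like.
    g_out, l_out = d_global(fused), d_local(fused)
    adv = (F.binary_cross_entropy_with_logits(g_out, torch.ones_like(g_out)) +
           F.binary_cross_entropy_with_logits(l_out, torch.ones_like(l_out)))
    # Perceptual term: content consistency between source and fused features.
    perc = sum(F.l1_loss(f, s) for f, s in zip(fused_feats, source_feats))
    return (w_adv * adv + w_col * color_loss(fused, visible) +
            w_perc * perc + w_grad * gradient_loss(fused, visible, infrared))
```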
Journal overview:
The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region.
Its core topics can be summarized as the generation, propagation, and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine.
Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.