RDAGAN: Residual Dense Module and Attention-Guided Generative Adversarial Network for infrared image generation
Tianwei Zhou, Yanfeng Tang, Weida Zhan, Yu Chen, Yueyi Han, Deng Han
Infrared Physics & Technology, Volume 145, Article 105685, published 24 December 2024
DOI: 10.1016/j.infrared.2024.105685
URL: https://www.sciencedirect.com/science/article/pii/S1350449524005693
Abstract
Visible-to-Infrared image Translation (V2I) is fundamentally an ill-defined problem, since RGB images carry no information about the thermal characteristics of different objects. In recent years, with the development of deep learning, infrared image generation has been widely studied; however, existing methods often produce infrared images with incomplete structure and blurred details. To address this, this paper proposes the Residual Dense Module and Attention-Guided Generative Adversarial Network (RDAGAN) to improve the quality of generated infrared images. RDAGAN incorporates several modules. First, we adopt a Residual Dense Module (RDM), which improves the model's feature extraction capability by increasing its depth and width. Second, to guide the model to focus on the key parts of the image, we design an Attention-Guided Module (AGM), which enables the model to learn and generate the key features of the infrared image more efficiently, thus producing a pseudo-image that is closer to the real infrared image. To further optimize the generated infrared images, we also propose a composite loss function combining an adversarial loss, L1 loss, perceptual loss, and SSIM loss: the perceptual loss significantly reduces the LPIPS value and improves the visual perceptual quality of the generated images, while the SSIM loss sharpens edge and texture details and significantly improves the SSIM value. Experimental results on the KAIST, FLIR, and LLVIP datasets show that RDAGAN outperforms existing methods in quantitative metrics and visual quality, generating clearer and more realistic infrared images.
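The abstract describes the composite objective only at a high level, so the following is a minimal PyTorch sketch (not the authors' released code) of how an adversarial, L1, perceptual, and SSIM term might be combined for a single-channel infrared output. The loss weights, the VGG16 feature layer used for the perceptual term, the single-scale SSIM approximation, and the non-saturating adversarial formulation are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class CompositeLoss(nn.Module):
    """Sketch of an adversarial + L1 + perceptual + SSIM objective (weights are assumed)."""
    def __init__(self, lambda_l1=100.0, lambda_perc=1.0, lambda_ssim=1.0):
        super().__init__()
        # Frozen VGG16 features up to relu3_3 as a generic perceptual feature extractor (assumed choice).
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.lambda_l1, self.lambda_perc, self.lambda_ssim = lambda_l1, lambda_perc, lambda_ssim

    @staticmethod
    def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        # Simplified single-scale SSIM with an 11x11 uniform window (not the standard Gaussian window).
        mu_x = F.avg_pool2d(x, 11, 1, 5)
        mu_y = F.avg_pool2d(y, 11, 1, 5)
        sigma_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
        sigma_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
        sigma_xy = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
        num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
        den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
        return (num / den).mean()

    def forward(self, fake_ir, real_ir, d_fake_logits):
        # Non-saturating generator-side adversarial term against the discriminator's logits.
        adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
        l1 = F.l1_loss(fake_ir, real_ir)
        # Perceptual term: L1 distance in VGG feature space; single-channel IR is tiled to 3 channels.
        perc = F.l1_loss(self.vgg(fake_ir.repeat(1, 3, 1, 1)), self.vgg(real_ir.repeat(1, 3, 1, 1)))
        ssim_loss = 1.0 - self.ssim(fake_ir, real_ir)
        return adv + self.lambda_l1 * l1 + self.lambda_perc * perc + self.lambda_ssim * ssim_loss

In a pix2pix-style training loop, this loss would be evaluated on the generator's output together with the discriminator's logits for that output; the relative weights would need to be tuned per dataset.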
Journal description
The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions, from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region.
Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine.
Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.