Enhancing three-source cross-modality image fusion with improved DenseNet for infrared polarization and visible light images
Xuesong Wang, Bin Zhou, Jian Peng, Feng Huang, Xianyu Wu
Infrared Physics & Technology, Volume 141, Article 105493. Published 2024-08-19.
DOI: 10.1016/j.infrared.2024.105493
Full text: https://www.sciencedirect.com/science/article/pii/S1350449524003773
Citations: 0
Abstract
The fusion of multi-modal images to create an image that preserves the unique features of each modality as well as the features shared across modalities is a challenging task, particularly in the context of infrared (IR)-visible image fusion. In addition, the presence of polarization and IR radiation information in images obtained from IR polarization sensors further complicates the multi-modal image-fusion process. This study proposes a fusion network designed to overcome the challenges associated with the integration of low-resolution IR, IR polarization, and high-resolution visible (VIS) images. By introducing cross-attention modules and a multi-stage fusion approach, the network can effectively extract and fuse features from different modalities, fully expressing the diversity of the images. The network learns an end-to-end mapping from source to fused images using a loss function, eliminating the need for ground-truth images for fusion. Experimental results on public datasets and remote-sensing field-test data demonstrate that the proposed method achieves commendable results in qualitative and quantitative evaluations, with gradient-based fusion performance $Q^{AB/F}$, mutual information (MI), and $Q_{CB}$ values higher than the second-best values by 0.20, 0.94, and 0.04, respectively. This study provides a comprehensive representation of target-scene information, resulting in enhanced image quality and improved object-identification capabilities. In addition, outdoor and VIS image datasets are produced, providing a data foundation and reference for future research in related fields.
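For context on the reported metrics: $Q^{AB/F}$ (Xydeas–Petrović) measures how much source gradient information is transferred to the fused image, $Q_{CB}$ is the Chen–Blum perceptual fusion metric, and MI measures the statistical dependence between the fused image and each source. The sketch below is a minimal histogram-based MI computation in NumPy, assuming 8-bit grayscale inputs; it illustrates the standard metric, not the authors' implementation, and the function name, bin count, and three-source summation convention are assumptions.

```python
import numpy as np

def mutual_information(fused: np.ndarray, source: np.ndarray, bins: int = 256) -> float:
    """Histogram-based mutual information (in bits) between two grayscale images."""
    # Joint intensity histogram, normalized to a joint probability distribution
    joint, _, _ = np.histogram2d(fused.ravel(), source.ravel(), bins=bins)
    p_fs = joint / joint.sum()
    p_f = p_fs.sum(axis=1)   # marginal distribution of the fused image
    p_s = p_fs.sum(axis=0)   # marginal distribution of the source image
    nz = p_fs > 0            # restrict to non-zero entries to avoid log(0)
    return float(np.sum(p_fs[nz] * np.log2(p_fs[nz] / np.outer(p_f, p_s)[nz])))

# For three-source fusion, a natural convention (an assumption here) is to sum
# the MI between the fused image F and each source:
#   MI_total = MI(F, IR) + MI(F, IR_polarization) + MI(F, VIS)
```

Higher MI indicates that more source information survives fusion; wherever the joint probability is non-zero, both marginals are non-zero, so the division inside the logarithm is safe.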
About the Journal
The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region.
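As a quick check on the quoted band edges (a worked conversion, not part of the journal text), wavelength and frequency are related by $f = c/\lambda$:

```latex
f = \frac{c}{\lambda}:\qquad
\frac{3\times10^{8}\ \mathrm{m/s}}{1\times10^{-3}\ \mathrm{m}} = 3\times10^{11}\ \mathrm{Hz} = 300\ \mathrm{GHz},
\qquad
\frac{3\times10^{8}\ \mathrm{m/s}}{0.75\times10^{-6}\ \mathrm{m}} = 4\times10^{14}\ \mathrm{Hz} = 400\ \mathrm{THz}.
```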
Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine.
Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.