{"title":"ConvNeXtFusion:利用残差密集和交叉ConvNeXt网络进行多传感器图像融合","authors":"Mohammed Zouaoui Laidouni, Boban Bondžulić, Dimitrije Bujaković, Touati Adli, Milenko Andrić","doi":"10.1016/j.infrared.2025.106005","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-sensor image fusion enhances the visual perception ability by producing a single fused image that combines the information from different image modalities. This paper presents a novel framework for multi-sensor image fusion based on ConvNeXt network to fuse visible (VIS) and long-wavelength infrared (LWIR) images, with the additional capability to incorporate near-infrared (NIR) image. The framework introduces a residual dense ConvNeXt module specifically designed for dense feature extraction across different modalities. To further optimize the fusion process, a residual cross ConvNeXt module is developed to combine the extracted features. Therefore, maximizing the interaction between modalities and leading to a more informative fused image. To facilitate unsupervised training and ensure the accurate representation of combined modalities in the fused image, a loss function integrating frequency and gradient information is constructed. The proposed method is extensively validated through experiments on four distinct datasets, including both subjective evaluations and objective comparisons. The results demonstrate the proposed framework’s superiority over existing state-of-the-art image fusion algorithms, particularly highlighting its strong generalization capability in handling both LWIR+VIS and LWIR+NIR+VIS fusion tasks. Finally, the practical utility of the proposed method is further demonstrated through its application to object detection tasks.</div></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"150 ","pages":"Article 106005"},"PeriodicalIF":3.4000,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ConvNeXtFusion: Multi-sensor image fusion via residual dense and cross ConvNeXt network\",\"authors\":\"Mohammed Zouaoui Laidouni, Boban Bondžulić, Dimitrije Bujaković, Touati Adli, Milenko Andrić\",\"doi\":\"10.1016/j.infrared.2025.106005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-sensor image fusion enhances the visual perception ability by producing a single fused image that combines the information from different image modalities. This paper presents a novel framework for multi-sensor image fusion based on ConvNeXt network to fuse visible (VIS) and long-wavelength infrared (LWIR) images, with the additional capability to incorporate near-infrared (NIR) image. The framework introduces a residual dense ConvNeXt module specifically designed for dense feature extraction across different modalities. To further optimize the fusion process, a residual cross ConvNeXt module is developed to combine the extracted features. Therefore, maximizing the interaction between modalities and leading to a more informative fused image. To facilitate unsupervised training and ensure the accurate representation of combined modalities in the fused image, a loss function integrating frequency and gradient information is constructed. The proposed method is extensively validated through experiments on four distinct datasets, including both subjective evaluations and objective comparisons. 
The results demonstrate the proposed framework’s superiority over existing state-of-the-art image fusion algorithms, particularly highlighting its strong generalization capability in handling both LWIR+VIS and LWIR+NIR+VIS fusion tasks. Finally, the practical utility of the proposed method is further demonstrated through its application to object detection tasks.</div></div>\",\"PeriodicalId\":13549,\"journal\":{\"name\":\"Infrared Physics & Technology\",\"volume\":\"150 \",\"pages\":\"Article 106005\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-07-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Infrared Physics & Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1350449525002981\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INSTRUMENTS & INSTRUMENTATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infrared Physics & Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350449525002981","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
ConvNeXtFusion: Multi-sensor image fusion via residual dense and cross ConvNeXt network
Multi-sensor image fusion enhances visual perception by producing a single fused image that combines information from different image modalities. This paper presents a novel framework for multi-sensor image fusion based on the ConvNeXt network, fusing visible (VIS) and long-wavelength infrared (LWIR) images, with the additional capability to incorporate near-infrared (NIR) images. The framework introduces a residual dense ConvNeXt module specifically designed for dense feature extraction across different modalities. To further optimize the fusion process, a residual cross ConvNeXt module is developed to combine the extracted features, maximizing the interaction between modalities and leading to a more informative fused image. To facilitate unsupervised training and ensure the accurate representation of the combined modalities in the fused image, a loss function integrating frequency and gradient information is constructed. The proposed method is extensively validated through experiments on four distinct datasets, including both subjective evaluations and objective comparisons. The results demonstrate the proposed framework's superiority over existing state-of-the-art image fusion algorithms, particularly highlighting its strong generalization capability in handling both LWIR+VIS and LWIR+NIR+VIS fusion tasks. Finally, the practical utility of the proposed method is further demonstrated through its application to object detection tasks.
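The abstract names two concrete technical ingredients: a residual dense ConvNeXt module for dense feature extraction, and an unsupervised loss combining frequency- and gradient-domain terms. The paper's exact configuration is not reproduced here, so the PyTorch sketch below is only a plausible illustration of those two ideas; the block count, the 1x1 squeeze convolutions, the Sobel-based gradient term, the FFT-magnitude frequency term, and the weights alpha and beta are all assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a residual dense ConvNeXt-style
# module and a frequency+gradient fusion loss. Sizes and weights are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvNeXtBlock(nn.Module):
    """Standard ConvNeXt block: 7x7 depthwise conv -> LayerNorm ->
    pointwise expansion -> GELU -> pointwise projection -> residual add."""

    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pw1 = nn.Linear(dim, 4 * dim)
        self.pw2 = nn.Linear(4 * dim, dim)

    def forward(self, x):
        res = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)            # NCHW -> NHWC for LayerNorm/Linear
        x = self.pw2(F.gelu(self.pw1(self.norm(x))))
        return res + x.permute(0, 3, 1, 2)   # back to NCHW, residual connection


class ResidualDenseConvNeXt(nn.Module):
    """Dense connectivity: each block sees the concatenation of all earlier
    features (squeezed back to `dim` by a 1x1 conv), plus a global residual."""

    def __init__(self, dim: int, n_blocks: int = 3):
        super().__init__()
        self.squeeze = nn.ModuleList(
            nn.Conv2d((i + 1) * dim, dim, kernel_size=1) for i in range(n_blocks))
        self.blocks = nn.ModuleList(ConvNeXtBlock(dim) for _ in range(n_blocks))
        self.out = nn.Conv2d((n_blocks + 1) * dim, dim, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for squeeze, block in zip(self.squeeze, self.blocks):
            feats.append(block(squeeze(torch.cat(feats, dim=1))))
        return x + self.out(torch.cat(feats, dim=1))     # global residual


def fusion_loss(fused, sources, alpha=1.0, beta=1.0):
    """Unsupervised loss: keep the strongest per-pixel gradients of any source
    (Sobel) and match the strongest FFT magnitudes. Assumes `fused` and every
    tensor in `sources` share one shape, e.g. single-channel (N, 1, H, W)."""
    kx = fused.new_tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    def grad_mag(img):
        return (F.conv2d(img, kx, padding=1).abs()
                + F.conv2d(img, ky, padding=1).abs())

    # Gradient term: fused edges should follow the strongest source edges.
    target_grad = torch.stack([grad_mag(s) for s in sources]).amax(dim=0)
    grad_term = F.l1_loss(grad_mag(fused), target_grad)

    # Frequency term: fused spectrum should follow the strongest source spectrum.
    target_freq = torch.stack([torch.fft.fft2(s).abs() for s in sources]).amax(dim=0)
    freq_term = F.l1_loss(torch.fft.fft2(fused).abs(), target_freq)

    return alpha * grad_term + beta * freq_term
```

The residual cross ConvNeXt module, which exchanges features across modality branches before fusion, is not sketched because the abstract gives no structural detail to ground it.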
Journal Introduction:
The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions, from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with, and directly relevant to, this spectral region.
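As a quick check of the parenthetical equivalences above, wavelength and frequency are related by ν = c/λ, so the 1 mm long-wavelength limit corresponds to:

```latex
\nu = \frac{c}{\lambda}
    = \frac{3\times10^{8}\,\mathrm{m/s}}{1\times10^{-3}\,\mathrm{m}}
    = 3\times10^{11}\,\mathrm{Hz} = 300\ \mathrm{GHz}
```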
Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine.
Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.