{"title":"无isp多光谱融合成像极低光增强摄影","authors":"Yilan Nan, Qican Zhang, Tingdong Kou, Tianyue He, Cui Huang, Cuizhen Lu, Junfei Shen","doi":"10.1016/j.inffus.2025.103400","DOIUrl":null,"url":null,"abstract":"<div><div>Achieving high-quality imaging under extremely low-light conditions is crucial for autonomous driving and night surveillance applications. Traditional approaches predominantly focus on the post-processing of degraded RGB data, which struggle to effectively mitigate noise in very low-light situations with limited input information and significant noise interference. In this study, a computational multi-spectral fusion imaging framework is proposed to enhance low-light images by encoding a broader spectrum of optical source information into the imaging pipeline. An end-to-end spectral fusion network (SPFNet), consisting of an encoder for the automatic extraction of scene features and decoder for channel fusion, is designed to integrate spectral fusion with image denoising. Utilizing the novel Multi_Conv module, a diverse range of spectral features are extracted from multi-spectral raw data, providing multi-scale cross-references for noise suppression, thereby facilitating high-quality image fusion. A pilot optical system was built to capture a real-scene multi-spectral-RGB dataset under illuminance conditions below 0.01 lx per spectrum. Experimental results confirm that the proposed method significantly outperforms traditional RGB imaging techniques, demonstrating an average improvement of over 7.87 dB in peak signal-to-noise ratio (PSNR) and 0.25 in structural similarity index (SSIM). Comprehensive ablation and contrast experiments were conducted to verify that the proposed model achieved the best performance in terms of detail reconstruction and color fidelity. Eschewing the need for a cumbersome traditional image signal processing (ISP) pipeline and strict experimental constraints, the proposed framework offers a novel and viable solution for extreme low-light imaging applications, including portable photography, space exploration, remote sensing, and deep-sea exploration.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"124 ","pages":"Article 103400"},"PeriodicalIF":14.7000,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ISP-free multi-spectrum fused imaging for extremely low-light enhanced photography\",\"authors\":\"Yilan Nan, Qican Zhang, Tingdong Kou, Tianyue He, Cui Huang, Cuizhen Lu, Junfei Shen\",\"doi\":\"10.1016/j.inffus.2025.103400\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Achieving high-quality imaging under extremely low-light conditions is crucial for autonomous driving and night surveillance applications. Traditional approaches predominantly focus on the post-processing of degraded RGB data, which struggle to effectively mitigate noise in very low-light situations with limited input information and significant noise interference. In this study, a computational multi-spectral fusion imaging framework is proposed to enhance low-light images by encoding a broader spectrum of optical source information into the imaging pipeline. An end-to-end spectral fusion network (SPFNet), consisting of an encoder for the automatic extraction of scene features and decoder for channel fusion, is designed to integrate spectral fusion with image denoising. 
Utilizing the novel Multi_Conv module, a diverse range of spectral features are extracted from multi-spectral raw data, providing multi-scale cross-references for noise suppression, thereby facilitating high-quality image fusion. A pilot optical system was built to capture a real-scene multi-spectral-RGB dataset under illuminance conditions below 0.01 lx per spectrum. Experimental results confirm that the proposed method significantly outperforms traditional RGB imaging techniques, demonstrating an average improvement of over 7.87 dB in peak signal-to-noise ratio (PSNR) and 0.25 in structural similarity index (SSIM). Comprehensive ablation and contrast experiments were conducted to verify that the proposed model achieved the best performance in terms of detail reconstruction and color fidelity. Eschewing the need for a cumbersome traditional image signal processing (ISP) pipeline and strict experimental constraints, the proposed framework offers a novel and viable solution for extreme low-light imaging applications, including portable photography, space exploration, remote sensing, and deep-sea exploration.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"124 \",\"pages\":\"Article 103400\"},\"PeriodicalIF\":14.7000,\"publicationDate\":\"2025-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525004737\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525004737","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
ISP-free multi-spectrum fused imaging for extremely low-light enhanced photography
Achieving high-quality imaging under extremely low-light conditions is crucial for applications such as autonomous driving and night surveillance. Traditional approaches focus predominantly on post-processing degraded RGB data and struggle to suppress noise effectively under very low light, where input information is limited and noise interference is severe. In this study, a computational multi-spectral fusion imaging framework is proposed to enhance low-light images by encoding a broader spectrum of optical source information into the imaging pipeline. An end-to-end spectral fusion network (SPFNet), consisting of an encoder for automatic scene-feature extraction and a decoder for channel fusion, is designed to integrate spectral fusion with image denoising. Using the novel Multi_Conv module, the network extracts a diverse range of spectral features from multi-spectral raw data, providing multi-scale cross-references for noise suppression and thereby facilitating high-quality image fusion. A pilot optical system was built to capture a real-scene multi-spectral-RGB dataset under illuminance conditions below 0.01 lx per spectrum. Experimental results confirm that the proposed method significantly outperforms traditional RGB imaging, with average improvements of over 7.87 dB in peak signal-to-noise ratio (PSNR) and 0.25 in structural similarity index (SSIM). Comprehensive ablation and comparison experiments verify that the proposed model achieves the best performance in detail reconstruction and color fidelity. By eschewing the cumbersome traditional image signal processing (ISP) pipeline and strict experimental constraints, the proposed framework offers a novel and viable solution for extreme low-light imaging applications, including portable photography, space exploration, remote sensing, and deep-sea exploration.
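To make the encoder-decoder idea described in the abstract more concrete, the sketch below shows, in PyTorch, how a multi-branch convolution block and a small encoder-decoder could fuse multi-spectral raw channels directly into an RGB image without an ISP pipeline. It is only an illustrative toy in the spirit of the Multi_Conv module and SPFNet mentioned above; the class names, layer counts, kernel sizes, channel widths, and band count are assumptions and do not reflect the paper's actual network.

```python
# Minimal sketch of multi-scale spectral feature extraction and channel fusion.
# All hyperparameters here (kernel sizes, channel widths, number of bands) are
# illustrative assumptions, NOT the configuration reported in the paper.
import torch
import torch.nn as nn


class MultiScaleConvBlock(nn.Module):
    """Extracts features at several receptive fields in parallel,
    in the spirit of the Multi_Conv module described in the abstract."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)  # assumed scales
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(b(x)) for b in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))


class SpectralFusionNet(nn.Module):
    """Toy encoder-decoder: encode multi-spectral raw data, decode to RGB."""

    def __init__(self, num_bands: int = 8, base_ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            MultiScaleConvBlock(num_bands, base_ch),
            MultiScaleConvBlock(base_ch, 2 * base_ch),
        )
        self.decoder = nn.Sequential(
            MultiScaleConvBlock(2 * base_ch, base_ch),
            nn.Conv2d(base_ch, 3, kernel_size=1),  # channel fusion to RGB
            nn.Sigmoid(),
        )

    def forward(self, raw_bands: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(raw_bands))


if __name__ == "__main__":
    # e.g. an 8-band multi-spectral raw frame of 256x256 pixels
    x = torch.rand(1, 8, 256, 256)
    rgb = SpectralFusionNet(num_bands=8)(x)
    print(rgb.shape)  # torch.Size([1, 3, 256, 256])
```

In this toy setup the multi-scale branches provide the "cross-references" across receptive fields, while the final 1x1 convolution performs the channel fusion from spectral features to RGB; the real SPFNet architecture and training details are described in the paper itself.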
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.