Tongshun Zhang, Pingping Liu, Mengen Cai, Xiaoyi Wang, Qiuzhan Zhou
DOI: 10.1016/j.inffus.2025.103380
Journal: Information Fusion, Volume 124, Article 103380
Published: 2025-06-15
Cross-modal guided and refinement-enhanced Retinex network for robust low-light image enhancement
The Retinex theory has long been a cornerstone in the field of low-light image enhancement, garnering significant attention. However, traditional Retinex-based methods often suffer from insufficient robustness to noise interference, necessitating the introduction of additional regularization terms or handcrafted priors to improve performance. These handcrafted priors and regularization-based approaches, however, lack adaptability and struggle to handle the complexity and variability of low-light environments effectively. To address these limitations, this paper proposes a Cross-Modal Guided and Refinement-Enhanced Retinex Network (CMRetinexNet) that leverages the adaptive guidance potential of auxiliary modalities and incorporates refinement modules to enhance Retinex decomposition and synthesis. Specifically: (a) Considering the characteristics of the reflectance component, we introduce auxiliary modal information to adaptively improve the accuracy of reflectance estimation. (b) For the illumination component, we design a reconstruction module that combines local and frequency-domain information, to iteratively enhance both regional and global illumination levels. (c) To address the inherent uncertainty in the element-wise multiplication of reflectance and illumination components during Retinex synthesis, we propose a synthesis and refinement module that effectively fuses illumination and reflectance components by leveraging cross-channel and spatial contextual information. Extensive experiments on multiple public datasets demonstrate that the proposed model achieves significant improvements in both qualitative and quantitative metrics compared to state-of-the-art methods, validating its effectiveness and superiority in low-light image enhancement.
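To make the model the abstract builds on concrete, the sketch below illustrates the classical Retinex image formation assumption, in which an observed image is the element-wise product of a reflectance map and an illumination map. This is a minimal toy example of the underlying Retinex model only, not the paper's CMRetinexNet; the gamma-curve brightening of the illumination map is an assumed placeholder for the learned reconstruction module described above.

```python
import numpy as np

def retinex_synthesize(reflectance: np.ndarray, illumination: np.ndarray) -> np.ndarray:
    """Element-wise Retinex synthesis: I = R * L."""
    return reflectance * illumination

def enhance_illumination(illumination: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Brighten a dim illumination map with a simple gamma curve.

    Values are assumed to lie in [0, 1]; gamma < 1 raises dark regions.
    This stands in for the paper's learned illumination reconstruction.
    """
    return np.clip(illumination, 0.0, 1.0) ** gamma

# A uniform surface (reflectance 0.8) observed under dim lighting (0.25).
R = np.full((2, 2), 0.8)
L_dim = np.full((2, 2), 0.25)

I_dark = retinex_synthesize(R, L_dim)                          # 0.8 * 0.25 = 0.2
I_bright = retinex_synthesize(R, enhance_illumination(L_dim))  # 0.8 * 0.5  = 0.4
```

The element-wise product is also where the "inherent uncertainty" mentioned in (c) arises: many (R, L) pairs multiply to the same observed image, which is why the paper adds a refinement module rather than relying on the raw product.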
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.