Cross-modal guided and refinement-enhanced Retinex network for robust low-light image enhancement

Impact Factor: 14.7 · CAS Region 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Tongshun Zhang, Pingping Liu, Mengen Cai, Xiaoyi Wang, Qiuzhan Zhou
DOI: 10.1016/j.inffus.2025.103380
Journal: Information Fusion, Volume 124, Article 103380
Published: 2025-06-15 (Journal Article)
Citations: 0

Abstract

The Retinex theory has long been a cornerstone in the field of low-light image enhancement, garnering significant attention. However, traditional Retinex-based methods often suffer from insufficient robustness to noise interference, necessitating the introduction of additional regularization terms or handcrafted priors to improve performance. These handcrafted priors and regularization-based approaches, however, lack adaptability and struggle to handle the complexity and variability of low-light environments effectively. To address these limitations, this paper proposes a Cross-Modal Guided and Refinement-Enhanced Retinex Network (CMRetinexNet) that leverages the adaptive guidance potential of auxiliary modalities and incorporates refinement modules to enhance Retinex decomposition and synthesis. Specifically: (a) Considering the characteristics of the reflectance component, we introduce auxiliary modal information to adaptively improve the accuracy of reflectance estimation. (b) For the illumination component, we design a reconstruction module that combines local and frequency-domain information to iteratively enhance both regional and global illumination levels. (c) To address the inherent uncertainty in the element-wise multiplication of reflectance and illumination components during Retinex synthesis, we propose a synthesis and refinement module that effectively fuses illumination and reflectance components by leveraging cross-channel and spatial contextual information. Extensive experiments on multiple public datasets demonstrate that the proposed model achieves significant improvements in both qualitative and quantitative metrics compared to state-of-the-art methods, validating its effectiveness and superiority in low-light image enhancement.
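The core Retinex operations the abstract builds on — splitting an image into reflectance and illumination, then recombining them element-wise — can be sketched as follows. This is a minimal NumPy illustration using the common max-channel illumination prior, not the paper's learned network; the function names and the gamma curve are illustrative assumptions.

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Classic Retinex split: illumination estimated as the per-pixel
    max over color channels (a common simple prior), reflectance as
    the ratio image / illumination."""
    illumination = img.max(axis=-1, keepdims=True)   # H x W x 1
    reflectance = img / (illumination + eps)         # H x W x 3
    return reflectance, illumination

def retinex_synthesize(reflectance, illumination, gamma=0.45):
    """Naive element-wise synthesis R * L after brightening L with a
    gamma curve; the paper replaces this uncertain product with a
    learned synthesis-and-refinement module."""
    enhanced_illum = np.power(illumination, gamma)
    return np.clip(reflectance * enhanced_illum, 0.0, 1.0)

# Toy example: a uniformly dim image is brightened by the gamma-lifted
# illumination while reflectance (scene content) is preserved.
img = np.full((4, 4, 3), 0.1)
R, L = retinex_decompose(img)
out = retinex_synthesize(R, L)
```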

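Point (b) of the abstract mentions combining local and frequency-domain information to enhance both regional and global illumination. A hedged toy version of the frequency-domain side — amplifying only the low-frequency band of an illumination map, which carries the global brightness layout — might look like the sketch below. The function name, gain, and radius are assumptions for illustration, not the paper's reconstruction module.

```python
import numpy as np

def boost_global_illumination(illum, gain=1.5, radius=4):
    """Amplify the low-frequency band of a 2-D illumination map.
    Low frequencies encode global brightness; high frequencies
    (edges, texture) are left untouched."""
    F = np.fft.fftshift(np.fft.fft2(illum))          # DC moved to center
    h, w = illum.shape
    yy, xx = np.ogrid[:h, :w]
    low_freq = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    F[low_freq] *= gain                              # boost only the low band
    out = np.fft.ifft2(np.fft.ifftshift(F)).real
    return np.clip(out, 0.0, 1.0)

# Toy example: a uniform 0.2 illumination map has only a DC component,
# so the whole map is lifted by the gain.
out = boost_global_illumination(np.full((8, 8), 0.2))
```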
Source journal: Information Fusion (Engineering & Technology — Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Annual article count: 161
Review time: 7.9 months
Aims and scope: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.