Residual Quotient Learning for Zero-Reference Low-Light Image Enhancement

Chao Xie;Linfeng Fei;Huanjie Tao;Yaocong Hu;Wei Zhou;Jiun Tian Hoe;Weipeng Hu;Yap-Peng Tan
IEEE Transactions on Image Processing, vol. 34, pp. 365-378
DOI: 10.1109/TIP.2024.3519997
Published online: 2024-12-24. Available at: https://ieeexplore.ieee.org/document/10815017/

Abstract

Recently, neural networks have become the dominant approach to low-light image enhancement (LLIE), with at least one-third of them adopting a Retinex-related architecture. However, through in-depth analysis, we contend that this most widely accepted LLIE structure is suboptimal, particularly when addressing the non-uniform illumination commonly observed in natural images. In this paper, we present a novel variant learning framework, termed residual quotient learning, to substantially alleviate this issue. Instead of following the existing Retinex-related decomposition-enhancement-reconstruction process, our basic idea is to explicitly reformulate the light enhancement task as adaptively predicting the latent quotient with reference to the original low-light input in a residual learning fashion. By leveraging the proposed residual quotient learning, we develop a lightweight yet effective network called ResQ-Net. This network features enhanced non-uniform illumination modeling capabilities, making it more suitable for real-world LLIE tasks. Moreover, due to its well-designed structure and reference-free loss function, ResQ-Net is flexible in training as it allows for zero-reference optimization, which further enhances the generalization and adaptability of our entire framework. Extensive experiments on various benchmark datasets demonstrate the merits and effectiveness of the proposed residual quotient learning, and our trained ResQ-Net outperforms state-of-the-art methods both qualitatively and quantitatively. Furthermore, a practical application in dark face detection is explored, and the preliminary results confirm the potential and feasibility of our method in real-world scenarios.
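To make the core idea concrete, the sketch below illustrates one plausible reading of residual quotient learning: rather than decomposing the image into reflectance and illumination as in Retinex pipelines, a network predicts a per-pixel residual relative to the input, that residual defines a quotient map, and the enhanced image is the input divided by the quotient. The function names, the toy stand-in network, and the exact mapping from residual to quotient are all illustrative assumptions; the paper's actual ResQ-Net architecture and zero-reference losses are not reproduced here.

```python
import numpy as np

def enhance_residual_quotient(low, predict_residual):
    """Hypothetical sketch of quotient-based enhancement in a residual fashion.

    The network's output is interpreted as a residual r(x) in [0, 1);
    the quotient map is Q = 1 - r(x), so Q < 1 brightens the pixel
    when we divide: enhanced = low / Q. Darker regions can receive a
    smaller Q (stronger brightening), which is one way to model
    non-uniform illumination.
    """
    residual = predict_residual(low)            # assumed in [0, 1)
    quotient = 1.0 - residual                   # Q in (0, 1]
    quotient = np.clip(quotient, 1e-3, 1.0)     # guard against division by zero
    return np.clip(low / quotient, 0.0, 1.0)

# Toy stand-in for the learned network: the darker the pixel, the larger
# the residual, hence the smaller the quotient and the stronger the boost.
toy_net = lambda x: 0.5 * (1.0 - x)

low = np.array([[0.1, 0.4],
                [0.8, 0.2]])
enh = enhance_residual_quotient(low, toy_net)
```

In this toy example the pixel at 0.1 is divided by a quotient of 0.55 while the pixel at 0.8 is divided by 0.9, so dark regions are lifted proportionally more than bright ones.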