{"title":"Unsupervised Low-Light Image Enhancement With Self-Paced Learning","authors":"Yu Luo;Xuanrong Chen;Jie Ling;Chao Huang;Wei Zhou;Guanghui Yue","doi":"10.1109/TMM.2024.3521752","DOIUrl":null,"url":null,"abstract":"Low-light image enhancement (LIE) aims to restore images taken under poor lighting conditions, thereby extracting more information and details to robustly support subsequent visual tasks. While past deep learning (DL)-based techniques have achieved certain restoration effects, these existing methods treat all samples equally, ignoring the fact that difficult samples may be detrimental to the network's convergence at the initial training stages of network training. In this paper, we introduce a self-paced learning (SPL)-based LIE method named SPNet, which consists of three key components: the feature extraction module (FEM), the low-light image decomposition module (LIDM), and a pre-trained denoise module. Specifically, for a given low-light image, we first input the image, its pseudo-reference image, and its histogram-equalized version into the FEM to obtain preliminary features. Second, to avoid ambiguities during the early stages of training, these features are then adaptively fused via an SPL strategy and processed for retinex decomposition via LIDM. Third, we enhance the network performance by constraining the gradient prior relationship between the illumination components of the images. Finally, a pre-trained denoise module reduces noise inherent in LIE. Extensive experiments on nine public datasets reveal that the proposed SPNet outperforms eight state-of-the-art DL-based methods in both qualitative and quantitative evaluations and outperforms three conventional methods in quantitative assessments.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1808-1820"},"PeriodicalIF":8.4000,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10814698/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Low-light image enhancement (LIE) aims to restore images captured under poor lighting conditions, thereby recovering more information and detail to robustly support subsequent visual tasks. While past deep learning (DL)-based techniques have achieved a certain degree of restoration, these existing methods treat all samples equally, ignoring the fact that difficult samples may hinder the network's convergence during the initial stages of training. In this paper, we introduce a self-paced learning (SPL)-based LIE method named SPNet, which consists of three key components: a feature extraction module (FEM), a low-light image decomposition module (LIDM), and a pre-trained denoise module. Specifically, for a given low-light image, we first feed the image, its pseudo-reference image, and its histogram-equalized version into the FEM to obtain preliminary features. Second, to avoid ambiguities during the early stages of training, these features are adaptively fused via an SPL strategy and passed to the LIDM for Retinex decomposition. Third, we further improve performance by constraining the gradient-prior relationship between the illumination components of the images. Finally, a pre-trained denoise module reduces the noise inherent in LIE. Extensive experiments on nine public datasets show that the proposed SPNet outperforms eight state-of-the-art DL-based methods in both qualitative and quantitative evaluations, and surpasses three conventional methods in quantitative assessments.
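The abstract does not specify SPNet's exact fusion rule or pace schedule, so the sketch below only illustrates the generic self-paced learning weighting idea the method builds on: easy (low-loss) samples are weighted in first, and the pace threshold grows so harder samples enter the objective later. The hard and linear regularizers, the threshold `lam`, and the growth factor `mu` are standard textbook choices assumed here for illustration, not details taken from the paper.

```python
# Minimal sketch of a generic self-paced learning (SPL) weighting step.
# The regularizers, `lam`, and `mu` below are illustrative assumptions,
# not SPNet's actual fusion strategy.
import torch


def spl_weights(per_sample_loss: torch.Tensor, lam: float, mode: str = "hard") -> torch.Tensor:
    """Closed-form SPL weights v* for the classic regularizers.

    hard:   v_i = 1              if loss_i < lam, else 0
    linear: v_i = 1 - loss_i/lam if loss_i < lam, else 0
    """
    loss = per_sample_loss.detach()  # weights are not back-propagated through
    if mode == "hard":
        return (loss < lam).float()
    if mode == "linear":
        return torch.clamp(1.0 - loss / lam, min=0.0)
    raise ValueError(f"unknown mode: {mode}")


# Toy loop: low-loss samples dominate early; raising `lam` each epoch
# gradually admits harder samples into the weighted objective.
losses = torch.tensor([0.2, 0.9, 1.7, 3.0])  # per-sample reconstruction losses
lam, mu = 1.0, 1.3                           # initial pace and growth factor (assumed)
for epoch in range(3):
    v = spl_weights(losses, lam, mode="linear")
    weighted_loss = (v * losses).sum() / v.sum().clamp(min=1e-8)
    lam *= mu                                # anneal the pace threshold
    print(epoch, v.tolist(), float(weighted_loss))
```

In SPNet this kind of weighting is applied to adaptively fuse features before Retinex decomposition, rather than to a plain per-sample training loss as in the toy loop above.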
Journal Introduction:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.