{"title":"Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement","authors":"Yuzhen Niu;Fusheng Li;Yuezhou Li;Siling Chen;Yuzhong Chen","doi":"10.1109/TCI.2025.3564112","DOIUrl":null,"url":null,"abstract":"It is a challenging task to obtain high-quality images in low-light scenarios. While existing low-light image enhancement methods learn the mapping from low-light to clear images, such a straightforward approach lacks the targeted design for real-world scenarios, hampering their practical utility. As a result, issues such as overexposure and color distortion are likely to arise when processing images in uneven luminance or extreme darkness. To address these issues, we propose an adaptive luminance enhancement and high-fidelity color correction network (LCNet), which adopts a strategy of enhancing luminance first and then correcting color. Specifically, in the adaptive luminance enhancement stage, we design a multi-stage dual attention residual module (MDARM), which incorporates parallel spatial and channel attention mechanisms within residual blocks. This module extracts luminance prior from the low-light image to adaptively enhance luminance, while suppressing overexposure in areas with sufficient luminance. In the high-fidelity color correction stage, we design a progressive multi-scale feature fusion module (PMFFM) that combines progressively stage-wise multi-scale feature fusion with long/short skip connections, enabling thorough interaction between features at different scales across stages. This module extracts and fuses color features with varying receptive fields to ensure accurate and consistent color correction. Furthermore, we introduce a multi-color-space loss to effectively constrain the color correction. These two stages together produce high-quality images with appropriate luminance and high-fidelity color. Extensive experiments on both low-level and high-level tasks demonstrate that our LCNet outperforms state-of-the-art methods and achieves superior performance for low-light image enhancement in real-world scenarios.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"732-747"},"PeriodicalIF":4.8000,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Imaging","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10976393/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Obtaining high-quality images in low-light scenarios is a challenging task. While existing low-light image enhancement methods learn a mapping from low-light to clear images, this straightforward approach lacks targeted design for real-world scenarios, which hampers its practical utility. As a result, issues such as overexposure and color distortion are likely to arise when processing images with uneven luminance or extreme darkness. To address these issues, we propose an adaptive luminance enhancement and high-fidelity color correction network (LCNet), which adopts a strategy of first enhancing luminance and then correcting color. Specifically, in the adaptive luminance enhancement stage, we design a multi-stage dual attention residual module (MDARM) that incorporates parallel spatial and channel attention mechanisms within residual blocks. This module extracts a luminance prior from the low-light image to adaptively enhance luminance while suppressing overexposure in areas that are already sufficiently bright. In the high-fidelity color correction stage, we design a progressive multi-scale feature fusion module (PMFFM) that combines progressive, stage-wise multi-scale feature fusion with long/short skip connections, enabling thorough interaction between features at different scales across stages. This module extracts and fuses color features with varying receptive fields to ensure accurate and consistent color correction. Furthermore, we introduce a multi-color-space loss to effectively constrain the color correction. Together, the two stages produce high-quality images with appropriate luminance and high-fidelity color. Extensive experiments on both low-level and high-level tasks demonstrate that our LCNet outperforms state-of-the-art methods and achieves superior performance for low-light image enhancement in real-world scenarios.
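To make the two technical ideas in the abstract more concrete, the sketch below shows what a residual block with parallel channel and spatial attention, of the kind the MDARM description suggests, might look like in PyTorch. This is an illustrative sketch only: the class name DualAttentionResBlock, the layer choices, the channel counts, and the summation used to fuse the two attention branches are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class DualAttentionResBlock(nn.Module):
    """Residual block with parallel channel and spatial attention.

    Illustrative sketch of the kind of block the MDARM description
    suggests; the exact layers and the way the two attention maps are
    combined are assumptions, not the paper's design.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: pool over space, then predict per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: predict a single-channel map over H x W.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        ca = self.channel_att(feat)       # (B, C, 1, 1)
        sa = self.spatial_att(feat)       # (B, 1, H, W)
        attended = feat * ca + feat * sa  # parallel branches, summed (assumed fusion)
        return x + attended               # residual connection


if __name__ == "__main__":
    block = DualAttentionResBlock(channels=64)
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

Similarly, a multi-color-space loss can be read as a reconstruction loss measured in more than one color space. The paper does not state here which spaces or weights it uses, so the sketch below uses RGB plus YCbCr as stand-ins; rgb_to_ycbcr, MultiColorSpaceLoss, and the equal default weights are hypothetical choices for illustration.

```python
def rgb_to_ycbcr(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an RGB tensor in [0, 1] with shape (B, 3, H, W) to YCbCr."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y) + 0.5
    cr = 0.713 * (r - y) + 0.5
    return torch.cat([y, cb, cr], dim=1)


class MultiColorSpaceLoss(nn.Module):
    """L1 distance measured in RGB and YCbCr, weighted and summed (assumed spaces)."""

    def __init__(self, rgb_weight: float = 1.0, ycbcr_weight: float = 1.0):
        super().__init__()
        self.rgb_weight = rgb_weight
        self.ycbcr_weight = ycbcr_weight
        self.l1 = nn.L1Loss()

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        loss_rgb = self.l1(pred, target)
        loss_ycbcr = self.l1(rgb_to_ycbcr(pred), rgb_to_ycbcr(target))
        return self.rgb_weight * loss_rgb + self.ycbcr_weight * loss_ycbcr
```

Constraining the output in a second color space penalizes hue and saturation drift that a plain RGB loss can under-weight, which matches the abstract's goal of accurate and consistent color correction.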
About the Journal
The IEEE Transactions on Computational Imaging publishes articles in which computation plays an integral role in the image formation process. Papers cover all areas of computational imaging, from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system designs.