Title: Dynamic Fusion for Generating High-Quality Labels in Low-Light Image Enhancement
Authors: Zhuo-Ming Du; Hong-An Li; Fei-long Han
DOI: 10.1109/LSP.2025.3575608
Journal: IEEE Signal Processing Letters, vol. 32, pp. 2324-2328
Publication date: 2025-06-02 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11020740/
Citations: 0
Abstract
Generating high-quality labels is crucial for self-supervised learning in low-light conditions, where traditional enhancement methods often struggle to balance detail enhancement and color fidelity. This paper presents a traditional image fusion approach that dynamically combines Multi-Scale Retinex (MSR) and Adaptive Histogram Equalization (AHE) outputs with the original image using an adaptive weighting strategy. The primary goal is not to compete with state-of-the-art deep learning-based enhancement methods but to produce intermediate images that can serve as effective labels for training self-supervised models without requiring ground-truth datasets. By dynamically fusing MSR and AHE outputs with the original image using adaptive brightness and color weights, the method improves structural integrity while enhancing brightness and color consistency. Experiments on standard low-light datasets demonstrate significant improvements in PSNR and SSIM compared to traditional enhancement methods. However, a visual analysis of the generated labels reveals differences in color saturation when compared to ground truth, providing insights into designing a suitable loss function for future self-supervised learning applications. It is important to note that this work does not include experiments or methods related to self-supervised learning itself; instead, it focuses on preparing high-quality labels for such approaches. Additionally, our method strikes a balance between computational efficiency and visual quality, making it suitable for real-time applications and paving the way for more robust and versatile learning frameworks.
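The abstract describes fusing Multi-Scale Retinex (MSR) and Adaptive Histogram Equalization (AHE) outputs with the original image under an adaptive weighting strategy. The letter itself does not spell out the weighting rule here, so the following is only a minimal sketch of one plausible scheme: darker original pixels lean more heavily on the enhanced outputs, and the enhancement weight is split between MSR and AHE by their local brightness gain. The function name `fuse_labels` and every weighting formula below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_labels(original, msr_out, ahe_out, eps=1e-6):
    """Hypothetical sketch of adaptive fusion for label generation.

    All inputs are float H x W x 3 arrays in [0, 1]. The brightness-driven
    weighting below is an assumption made for illustration; the paper's
    exact adaptive brightness/color weights may differ.
    """
    def luma(img):
        # Rec. 601 luminance as a simple per-pixel brightness measure.
        return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

    y0 = luma(original)
    # Darker regions of the original receive more enhancement.
    alpha = 1.0 - y0                                  # fusion weight in [0, 1]
    # Split the enhancement weight between MSR and AHE by local gain.
    gain_msr = luma(msr_out) / (y0 + eps)
    gain_ahe = luma(ahe_out) / (y0 + eps)
    w_msr = gain_msr / (gain_msr + gain_ahe + eps)
    w_ahe = 1.0 - w_msr
    fused = ((1.0 - alpha)[..., None] * original
             + alpha[..., None] * (w_msr[..., None] * msr_out
                                   + w_ahe[..., None] * ahe_out))
    return np.clip(fused, 0.0, 1.0)
```

Because the fused result is a per-pixel convex combination of the three inputs, it preserves the original's structure while inheriting brightness from the enhanced outputs, which matches the role of an intermediate pseudo-label described in the abstract.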
About the journal:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also at several workshops organized by the Signal Processing Society.