Dynamic Fusion for Generating High-Quality Labels in Low-Light Image Enhancement

Impact Factor 3.2 · CAS Region 2 (Engineering & Technology) · JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Zhuo-Ming Du;Hong-An Li;Fei-long Han
DOI: 10.1109/LSP.2025.3575608
Journal: IEEE Signal Processing Letters, vol. 32, pp. 2324-2328
Publication date: 2025-06-02 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11020740/
Citations: 0

Abstract

Generating high-quality labels is crucial for self-supervised learning in low-light conditions, where traditional enhancement methods often struggle to balance detail enhancement and color fidelity. This paper presents a traditional image fusion approach that dynamically combines Multi-Scale Retinex (MSR) and Adaptive Histogram Equalization (AHE) outputs with the original image using an adaptive weighting strategy. The primary goal is not to compete with state-of-the-art deep learning-based enhancement methods but to produce intermediate images that can serve as effective labels for training self-supervised models without requiring ground-truth datasets. By dynamically fusing MSR and AHE outputs with the original image using adaptive brightness and color weights, the method improves structural integrity while enhancing brightness and color consistency. Experiments on standard low-light datasets demonstrate significant improvements in PSNR and SSIM compared to traditional enhancement methods. However, a visual analysis of the generated labels reveals differences in color saturation when compared to ground truth, providing insights into designing a suitable loss function for future self-supervised learning applications. It is important to note that this work does not include experiments or methods related to self-supervised learning itself; instead, it focuses on preparing high-quality labels for such approaches. Additionally, our method strikes a balance between computational efficiency and visual quality, making it suitable for real-time applications and paving the way for more robust and versatile learning frameworks.
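The abstract does not spell out the exact fusion rule, so the following is only a minimal single-channel sketch of the idea it describes: enhance the input with Multi-Scale Retinex and a histogram-based method, then blend both with the original image using a brightness-driven weight. The weight `w = 1 - mean(I)`, the equal mixing of the two branches, and the use of global histogram equalization as a stand-in for AHE are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur with edge padding.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, "valid")
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def msr(img, sigmas=(5, 15, 30)):
    # Multi-Scale Retinex: mean of log(I) - log(blur(I)) over several
    # scales, rescaled to [0, 1]. The scale choice here is illustrative.
    log_i = np.log1p(img)
    r = np.mean([log_i - np.log1p(gaussian_blur(img, s)) for s in sigmas],
                axis=0)
    return (r - r.min()) / (r.max() - r.min() + 1e-8)

def hist_eq(img, bins=256):
    # Global histogram equalization (a simple stand-in for AHE): map each
    # intensity through the empirical CDF.
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / img.size
    centers = (edges[:-1] + edges[1:]) / 2
    return np.interp(img.ravel(), centers, cdf).reshape(img.shape)

def dynamic_fuse(img):
    # Hypothetical adaptive weight: the darker the input, the more the
    # fused label leans on the enhanced branches.
    w = 1.0 - img.mean()
    enhanced = 0.5 * msr(img) + 0.5 * hist_eq(img)
    return np.clip(w * enhanced + (1.0 - w) * img, 0.0, 1.0)

# Synthetic low-light grayscale image with intensities in [0, 0.2].
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.2, (64, 64))
label = dynamic_fuse(dark)
print(label.shape, label.mean() > dark.mean())
```

In practice the paper fuses per-pixel brightness and color weights rather than a single scalar, and AHE operates on local tiles; the sketch only shows how the branch outputs and the original image combine under one adaptive weight.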
Source journal: IEEE Signal Processing Letters (Engineering: Electrical & Electronic)
CiteScore: 7.40
Self-citation rate: 12.80%
Articles per year: 339
Review time: 2.8 months
Journal description: The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also at several workshops organized by the Signal Processing Society.