ClassLIE: Structure- and Illumination-Adaptive Classification for Low-Light Image Enhancement

Zixiang Wei, Yiting Wang, Lichao Sun, Athanasios V. Vasilakos, Lin Wang
{"title":"ClassLIE: Structure- and Illumination-Adaptive Classification for Low-Light Image Enhancement","authors":"Zixiang Wei;Yiting Wang;Lichao Sun;Athanasios V. Vasilakos;Lin Wang","doi":"10.1109/TAI.2024.3405405","DOIUrl":null,"url":null,"abstract":"Low-light images often suffer from limited visibility and multiple types of degradation, rendering low-light image enhancement (LIE) a nontrivial task. Some endeavors have been made to enhance low-light images using convolutional neural networks (CNNs). However, they have low efficiency in learning the structural information and diverse illumination levels at the local regions of an image. Consequently, the enhanced results are affected by unexpected artifacts, such as unbalanced exposure, blur, and color bias. This article proposes a novel framework, called ClassLIE, that combines the potential of CNNs and transformers. It classifies and adaptively learns the structural and illumination information from the low-light images in a holistic and regional manner, thus showing better enhancement performance. Our framework first employs a structure and illumination classification (SIC) module to learn the degradation information adaptively. In SIC, we decompose an input image into an illumination map and a reflectance map. A class prediction block is then designed to classify the degradation information by calculating the structure similarity scores on the reflectance map and mean square error (MSE) on the illumination map. As such, each input image can be divided into patches with three enhancement difficulty levels. Then, a feature learning and fusion (FLF) module is proposed to adaptively learn the feature information with CNNs for different enhancement difficulty levels while learning the long-range dependencies for the patches in a holistic manner. Experiments on five benchmark datasets consistently show our ClassLIE achieves new state-of-the-art performance, with 25.74 peak signal-to-noise ratio (PSNR) and 0.92 structural similarity (SSIM) on the LOw-Light (LOL) dataset.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 9","pages":"4765-4775"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10539917/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Low-light images often suffer from limited visibility and multiple types of degradation, rendering low-light image enhancement (LIE) a nontrivial task. Some endeavors have been made to enhance low-light images using convolutional neural networks (CNNs). However, they are inefficient at learning the structural information and diverse illumination levels in the local regions of an image. Consequently, the enhanced results are affected by unexpected artifacts, such as unbalanced exposure, blur, and color bias. This article proposes a novel framework, called ClassLIE, that combines the potential of CNNs and transformers. It classifies and adaptively learns the structural and illumination information from low-light images in a holistic and regional manner, thus showing better enhancement performance. Our framework first employs a structure and illumination classification (SIC) module to learn the degradation information adaptively. In SIC, we decompose an input image into an illumination map and a reflectance map. A class prediction block is then designed to classify the degradation information by calculating structural similarity scores on the reflectance map and the mean squared error (MSE) on the illumination map. As such, each input image can be divided into patches with three enhancement difficulty levels. Then, a feature learning and fusion (FLF) module is proposed to adaptively learn the feature information with CNNs for the different enhancement difficulty levels while learning long-range dependencies across patches in a holistic manner. Experiments on five benchmark datasets consistently show that ClassLIE achieves new state-of-the-art performance, with a peak signal-to-noise ratio (PSNR) of 25.74 dB and a structural similarity (SSIM) of 0.92 on the LOw-Light (LOL) dataset.
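To make the SIC idea concrete, the sketch below illustrates one plausible reading of the patch-classification step described in the abstract: decompose an image Retinex-style into illumination and reflectance, then score each patch with SSIM on the reflectance map and MSE on the illumination map to assign one of three difficulty levels. This is not the authors' implementation; the abstract does not state what the scores are compared against, so this sketch assumes a paired well-lit reference image (as would be available during training), and the function names, patch size, and thresholds are placeholder assumptions.

```python
# Minimal sketch of an SIC-style patch classifier (assumed details, not the paper's code).
import numpy as np
from skimage.metrics import structural_similarity as ssim


def retinex_decompose(img: np.ndarray, eps: float = 1e-6):
    """Approximate an RGB image (H, W, 3) in [0, 1] as illumination x reflectance.

    Illumination is taken as the per-pixel channel maximum (a common Retinex
    approximation); reflectance is the image divided by that illumination.
    """
    illumination = img.max(axis=2, keepdims=True)      # (H, W, 1)
    reflectance = img / (illumination + eps)           # (H, W, 3)
    return illumination.squeeze(-1), reflectance


def classify_patches(low, ref, patch=64, ssim_thr=(0.4, 0.7), mse_thr=(0.01, 0.05)):
    """Assign each patch a difficulty level in {0: easy, 1: medium, 2: hard}.

    Structure is scored with SSIM on the reflectance maps; illumination error is
    scored with MSE on the illumination maps. Thresholds are illustrative only.
    """
    L_low, R_low = retinex_decompose(low)
    L_ref, R_ref = retinex_decompose(ref)
    labels = {}
    H, W = L_low.shape
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            s = ssim(R_low[y:y+patch, x:x+patch],
                     R_ref[y:y+patch, x:x+patch],
                     channel_axis=2, data_range=1.0)            # structure score
            m = np.mean((L_low[y:y+patch, x:x+patch] -
                         L_ref[y:y+patch, x:x+patch]) ** 2)     # illumination error
            # Lower structural similarity or larger illumination error -> harder patch.
            hard = int(s < ssim_thr[1]) + int(s < ssim_thr[0])
            dark = int(m > mse_thr[0]) + int(m > mse_thr[1])
            labels[(y, x)] = max(hard, dark)
    return labels
```

In the full framework, these per-patch labels would route patches to the difficulty-specific CNN branches of the FLF module, while a transformer models long-range dependencies across all patches; the thresholds and routing shown here are only a stand-in for that mechanism.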