Illumination-aware divide-and-conquer network for improperly-exposed image enhancement

Journal: Neural Networks · Impact Factor 6.0 · Q1, Computer Science, Artificial Intelligence
DOI: 10.1016/j.neunet.2024.106733 · Published: 2024-09-12
Full text: https://www.sciencedirect.com/science/article/pii/S0893608024006579
Citations: 0

Abstract

Improperly-exposed images often exhibit unsatisfactory visual characteristics such as inadequate illumination, low contrast, and the loss of small structures and details. The mapping from an improperly-exposed condition to a well-exposed one may vary significantly across exposure conditions, so enhancement methods that do not account for this tend to yield inconsistent results when applied to the same scene under different exposures. To obtain consistent enhancement across various exposures while restoring rich details, we propose an illumination-aware divide-and-conquer network (IDNet). Specifically, rather than directly learning a sophisticated nonlinear mapping from an improperly-exposed condition to a well-exposed one, we use the discrete wavelet transform (DWT) to decompose the image into a low-frequency (LF) component, which primarily captures brightness and contrast, and high-frequency (HF) components that depict fine-scale structures. To mitigate inconsistent correction across exposures, we extract a conditional feature from the input that encodes illumination-related global information; this feature modulates dynamic convolution weights, enabling precise correction of the LF component. Furthermore, because co-located positions of the LF and HF components are highly correlated, we create a mask that distills useful knowledge from the corrected LF component and integrates it into the HF components to support the restoration of fine-scale details. Extensive experimental results demonstrate that the proposed IDNet is superior to several state-of-the-art enhancement methods on two datasets with multiple exposures.
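The divide-and-conquer idea above can be sketched in plain NumPy: a one-level Haar DWT splits the image into an LF band (brightness and contrast) and three HF detail bands, the LF band is corrected using a global illumination cue, and the bands are recombined. Note the correction below is a deliberately simple stand-in: a gamma chosen from the LF mean replaces IDNet's learned, illumination-conditioned dynamic convolution, and a uniform gain on the HF bands replaces its learned mask. All function names are illustrative, not from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the low-frequency (LF)
    approximation and the three high-frequency (HF) detail bands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    lf = (a + b + c + d) / 4.0      # brightness / contrast
    hf_h = (a - b + c - d) / 4.0    # horizontal details
    hf_v = (a + b - c - d) / 4.0    # vertical details
    hf_d = (a - b - c + d) / 4.0    # diagonal details
    return lf, (hf_h, hf_v, hf_d)

def haar_idwt2(lf, hf):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    hf_h, hf_v, hf_d = hf
    out = np.empty((lf.shape[0] * 2, lf.shape[1] * 2), dtype=lf.dtype)
    out[0::2, 0::2] = lf + hf_h + hf_v + hf_d
    out[0::2, 1::2] = lf - hf_h + hf_v - hf_d
    out[1::2, 0::2] = lf + hf_h - hf_v - hf_d
    out[1::2, 1::2] = lf - hf_h - hf_v + hf_d
    return out

def correct_exposure(img):
    """Toy divide-and-conquer enhancement of an image in [0, 1]."""
    lf, hf = haar_dwt2(img)
    # Global illumination cue: mean of the LF band. IDNet maps such a
    # cue to dynamic convolution weights; here a gamma that pushes the
    # mean toward mid-gray stands in for the learned correction.
    cue = float(np.clip(lf.mean(), 1e-3, 1.0 - 1e-3))
    gamma = np.log(0.5) / np.log(cue)
    lf_corr = np.clip(lf, 0.0, 1.0) ** gamma
    # Scale HF details by the local LF gain so edges keep pace with the
    # brightened base layer (a crude proxy for IDNet's LF-to-HF mask).
    gain = lf_corr / np.clip(lf, 1e-3, None)
    hf_corr = tuple(h * gain for h in hf)
    return np.clip(haar_idwt2(lf_corr, hf_corr), 0.0, 1.0)
```

Because the gamma depends only on the global illumination cue, the same scene captured at different exposures receives a correspondingly different correction, which is the consistency property the abstract argues for; the learned network replaces this hand-picked rule with dynamic convolution weights.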

Source journal: Neural Networks (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Aims and scope: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussion between biology and technology, it aims to encourage the development of biologically-inspired artificial intelligence.