Towards a Flexible Semantic Guided Model for Single Image Enhancement and Restoration.

Yuhui Wu, Guoqing Wang, Shaochong Liu, Yang Yang, Wei Li, Xiongxin Tang, Shuhang Gu, Chongyi Li, Heng Tao Shen
{"title":"Towards a Flexible Semantic Guided Model for Single Image Enhancement and Restoration.","authors":"Yuhui Wu, Guoqing Wang, Shaochong Liu, Yang Yang, Wei Li, Xiongxin Tang, Shuhang Gu, Chongyi Li, Heng Tao Shen","doi":"10.1109/TPAMI.2024.3432308","DOIUrl":null,"url":null,"abstract":"<p><p>Low-light image enhancement (LLIE) investigates how to improve the brightness of an image captured in illumination-insufficient environments. The majority of existing methods enhance low-light images in a global and uniform manner, without taking into account the semantic information of different regions. Consequently, a network may easily deviate from the original color of local regions. To address this issue, we propose a semantic-aware knowledge-guided framework (SKF) that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model. We concentrate on incorporating semantic knowledge from three key aspects: a semantic-aware embedding module that adaptively integrates semantic priors in feature representation space, a semantic-guided color histogram loss that preserves color consistency of various instances, and a semantic-guided adversarial loss that produces more natural textures by semantic priors. Our SKF is appealing in acting as a general framework in the LLIE task. We further present a refined framework SKF++ with two new techniques: (a) Extra convolutional branch for intra-class illumination and color recovery through extracting local information and (b) Equalization-based histogram transformation for contrast enhancement and high dynamic range adjustment. Extensive experiments on various benchmarks of LLIE task and other image processing tasks show that models equipped with the SKF/SKF++ significantly outperform the baselines and our SKF/SKF++ generalizes to different models and scenes well. Besides, the potential benefits of our method in face detection and semantic segmentation in low-light conditions are discussed. The code and pre-trained models have been publicly available at https://github.com/langmanbusi/Semantic-Aware-Low-Light-Image-Enhancement.</p>","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TPAMI.2024.3432308","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Low-light image enhancement (LLIE) studies how to improve the brightness of images captured in illumination-insufficient environments. Most existing methods enhance low-light images in a global, uniform manner without taking the semantic information of different regions into account; as a result, a network can easily deviate from the original colors of local regions. To address this issue, we propose a semantic-aware knowledge-guided framework (SKF) that helps a low-light enhancement model learn the rich and diverse priors encapsulated in a semantic segmentation model. We incorporate semantic knowledge from three key aspects: a semantic-aware embedding module that adaptively integrates semantic priors in the feature representation space, a semantic-guided color histogram loss that preserves the color consistency of different instances, and a semantic-guided adversarial loss that produces more natural textures under the guidance of semantic priors. The SKF is appealing as a general framework for the LLIE task. We further present a refined framework, SKF++, with two new techniques: (a) an extra convolutional branch that extracts local information for intra-class illumination and color recovery, and (b) an equalization-based histogram transformation for contrast enhancement and high-dynamic-range adjustment. Extensive experiments on various benchmarks of the LLIE task and other image processing tasks show that models equipped with SKF/SKF++ significantly outperform their baselines and that SKF/SKF++ generalizes well to different models and scenes. We also discuss the potential benefits of our method for face detection and semantic segmentation in low-light conditions. The code and pre-trained models are publicly available at https://github.com/langmanbusi/Semantic-Aware-Low-Light-Image-Enhancement.
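To make the semantic-guided color histogram loss concrete, the following is a minimal, hypothetical PyTorch sketch: it builds a differentiable per-class, per-channel color histogram from a semantic label map and penalizes the distance between the histograms of the enhanced output and the reference. The function names, bin count, and Gaussian soft-binning scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a semantic-guided color histogram loss. Assumes the
# enhanced image, the reference image, and a per-pixel semantic label map are
# available; the paper's exact histogram construction and weighting may differ.
import torch
import torch.nn.functional as F


def soft_histogram(values, bins=32, sigma=0.02):
    """Differentiable histogram of 1-D values in [0, 1] via Gaussian soft binning."""
    centers = torch.linspace(0.0, 1.0, bins, device=values.device)   # (bins,)
    diffs = values.unsqueeze(-1) - centers                           # (N, bins)
    weights = torch.exp(-0.5 * (diffs / sigma) ** 2)                 # soft bin assignment
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)                                # normalize to a distribution


def semantic_color_histogram_loss(enhanced, reference, seg_labels, num_classes, bins=32):
    """
    enhanced, reference: (B, 3, H, W) tensors with values in [0, 1]
    seg_labels:          (B, H, W) integer semantic labels
    Matches per-class, per-channel color histograms between the enhanced output
    and the reference, so color statistics are constrained region by region.
    """
    loss = enhanced.new_zeros(())
    count = 0
    for b in range(enhanced.shape[0]):
        for c in range(num_classes):
            mask = seg_labels[b] == c          # (H, W) boolean region mask
            if mask.sum() < 16:                # skip absent or tiny regions
                continue
            for ch in range(3):
                h_enh = soft_histogram(enhanced[b, ch][mask], bins)
                h_ref = soft_histogram(reference[b, ch][mask], bins)
                loss = loss + F.l1_loss(h_enh, h_ref)
                count += 1
    return loss / max(count, 1)
```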

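The equalization-based histogram transformation in SKF++ builds on the classic idea of histogram equalization. The NumPy sketch below shows only that generic idea, applied to the luminance channel of a single image to stretch contrast while roughly preserving hue; SKF++ integrates its transformation into the enhancement pipeline, so the exact form may differ.

```python
# Minimal sketch of classic luminance histogram equalization, shown only to
# illustrate the equalization-based histogram transformation idea; this is not
# the SKF++ implementation.
import numpy as np


def equalize_luminance(img):
    """
    img: (H, W, 3) uint8 RGB image.
    Equalizes the luminance histogram to stretch contrast, then rescales the
    RGB channels by the luminance gain so colors are largely preserved.
    """
    img = img.astype(np.float32)
    # RGB -> luminance (ITU-R BT.601 weights)
    y = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    hist, _ = np.histogram(y, bins=256, range=(0, 255))
    cdf = hist.cumsum().astype(np.float32)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-8)   # normalized CDF in [0, 1]
    y_eq = np.interp(y.ravel(), np.arange(256), 255.0 * cdf).reshape(y.shape)
    gain = (y_eq + 1e-8) / (y + 1e-8)                          # per-pixel luminance gain
    out = np.clip(img * gain[..., None], 0, 255)
    return out.astype(np.uint8)
```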