Geometric-Aware Low-Light Image and Video Enhancement via Depth Guidance

Yingqi Lin, Xiaogang Xu, Jiafei Wu, Yan Han, Zhe Liu

IEEE Transactions on Image Processing, vol. 34, pp. 5442-5457, published 2025-08-14 (Impact Factor: 13.7)
DOI: 10.1109/TIP.2025.3597046

Abstract

Low-Light Enhancement (LLE) aims to improve the quality of photos and videos captured under low-light conditions. Notably, most existing LLE methods do not take advantage of geometric modeling. We believe that incorporating geometric information can enhance LLE performance, as it provides insight into the physical structure of the scene, which in turn influences illumination conditions. To address this, we propose a Geometry-Guided Low-Light Enhancement Refine Framework (GG-LLERF), designed to help low-light enhancement models learn improved features by integrating geometric priors into the feature representation space. In this paper, we employ depth priors as the geometric representation. Our approach focuses on integrating depth priors into various LLE frameworks through a unified methodology comprising two novel modules. First, a depth-aware feature extraction module injects depth priors into the image representation. Second, the Hierarchical Depth-Guided Feature Fusion Module (HDGFFM) uses a cross-domain attention mechanism to combine depth-aware features with the original image features within LLE models. We conducted extensive experiments on public low-light image and video enhancement benchmarks. The results show that our framework significantly improves existing LLE methods. The source code and pre-trained models are available at https://github.com/Estheryingqi/GG-LLERF
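The cross-domain attention fusion described above can be illustrated with a minimal sketch: image features provide the queries, while depth-aware features supply the keys and values, and the attended depth information is added back to the image features as a residual. This is not the authors' implementation (their HDGFFM is hierarchical and operates inside full LLE networks); the function name `cross_domain_attention`, the flat feature shapes, and the single-head formulation are all simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_domain_attention(img_feat, depth_feat, Wq, Wk, Wv):
    """Single-head cross-attention fusing depth features into image features.

    img_feat:   (N, c) image-domain feature vectors (queries)
    depth_feat: (N, c) depth-domain feature vectors (keys/values)
    Wq, Wk, Wv: (c, c) learned projection matrices (random here)
    Returns fused features of shape (N, c).
    """
    Q = img_feat @ Wq                      # queries from the image domain
    K = depth_feat @ Wk                    # keys from the depth domain
    V = depth_feat @ Wv                    # values from the depth domain
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # scaled dot-product similarity
    attn = softmax(scores, axis=-1)        # each query attends over depth tokens
    return img_feat + attn @ V             # residual fusion back into image features
```

In a trained network the projections `Wq`, `Wk`, `Wv` would be learned, and the fusion would be applied at multiple feature scales to realize the "hierarchical" aspect of the module.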