DRIGNet: Low-light image enhancement based on dual-range information guidance

IF 3.4 · CAS Tier 2 (Engineering & Technology) · JCR Q1 (Computer Science, Hardware & Architecture)
Feng Huang, Jiong Huang, Jing Wu, Jianhua Lin, Jing Guo, Yunxiang Li, Zhewei Liu
{"title":"DRIGNet:基于双距离信息制导的微光图像增强","authors":"Feng Huang,&nbsp;Jiong Huang,&nbsp;Jing Wu,&nbsp;Jianhua Lin,&nbsp;Jing Guo,&nbsp;Yunxiang Li,&nbsp;Zhewei Liu","doi":"10.1016/j.displa.2025.103163","DOIUrl":null,"url":null,"abstract":"<div><div>The task of low-light image enhancement aims to reconstruct details and visual information from degraded low-light images. However, existing deep learning methods for feature processing usually lack feature differentiation or fail to implement reasonable differentiation handling, which can limit the quality of the enhanced images, leading to issues like color distortion and blurred details. To address these limitations, we propose Dual-Range Information Guidance Network (DRIGNet). Specifically, we develop an efficient U-shaped architecture Dual-Range Information Guided Framework (DGF). DGF decouples traditional image features into dual-range information while integrating stage-specific feature properties with the proposed dual-range information. We design the Global Dynamic Enhancement Module (GDEM) using channel interaction and the Detail Focus Module (DFM) with three-directional filter, both embedded in DGF to model long-range and short-range features respectively. Additionally, we introduce a feature fusion strategy, Attention-Guided Fusion Module (AGFM), which merges dual-range information, facilitating complementary enhancement. In the encoder, DRIGNet extracts coherent long-range information and enhances the global structure of the image; in the decoder, DRIGNet captures short-range information and fuse dual-rage information to restore detailed areas. Finally, we conduct extensive quantitative and qualitative experiments to demonstrate that the proposed DRIGNet outperforms the current State-of-the-Art (SOTA) methods across ten datasets.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103163"},"PeriodicalIF":3.4000,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DRIGNet: Low-light image enhancement based on dual-range information guidance\",\"authors\":\"Feng Huang,&nbsp;Jiong Huang,&nbsp;Jing Wu,&nbsp;Jianhua Lin,&nbsp;Jing Guo,&nbsp;Yunxiang Li,&nbsp;Zhewei Liu\",\"doi\":\"10.1016/j.displa.2025.103163\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The task of low-light image enhancement aims to reconstruct details and visual information from degraded low-light images. However, existing deep learning methods for feature processing usually lack feature differentiation or fail to implement reasonable differentiation handling, which can limit the quality of the enhanced images, leading to issues like color distortion and blurred details. To address these limitations, we propose Dual-Range Information Guidance Network (DRIGNet). Specifically, we develop an efficient U-shaped architecture Dual-Range Information Guided Framework (DGF). DGF decouples traditional image features into dual-range information while integrating stage-specific feature properties with the proposed dual-range information. We design the Global Dynamic Enhancement Module (GDEM) using channel interaction and the Detail Focus Module (DFM) with three-directional filter, both embedded in DGF to model long-range and short-range features respectively. 
Additionally, we introduce a feature fusion strategy, Attention-Guided Fusion Module (AGFM), which merges dual-range information, facilitating complementary enhancement. In the encoder, DRIGNet extracts coherent long-range information and enhances the global structure of the image; in the decoder, DRIGNet captures short-range information and fuse dual-rage information to restore detailed areas. Finally, we conduct extensive quantitative and qualitative experiments to demonstrate that the proposed DRIGNet outperforms the current State-of-the-Art (SOTA) methods across ten datasets.</div></div>\",\"PeriodicalId\":50570,\"journal\":{\"name\":\"Displays\",\"volume\":\"90 \",\"pages\":\"Article 103163\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-07-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Displays\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0141938225002008\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225002008","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

The task of low-light image enhancement aims to reconstruct details and visual information from degraded low-light images. However, existing deep learning methods for feature processing usually lack feature differentiation or fail to implement reasonable differentiation handling, which can limit the quality of the enhanced images, leading to issues such as color distortion and blurred details. To address these limitations, we propose the Dual-Range Information Guidance Network (DRIGNet). Specifically, we develop an efficient U-shaped architecture, the Dual-Range Information Guided Framework (DGF). DGF decouples traditional image features into dual-range information while integrating stage-specific feature properties with the proposed dual-range information. We design the Global Dynamic Enhancement Module (GDEM) using channel interaction and the Detail Focus Module (DFM) with a three-directional filter, both embedded in DGF to model long-range and short-range features, respectively. Additionally, we introduce a feature fusion strategy, the Attention-Guided Fusion Module (AGFM), which merges dual-range information, facilitating complementary enhancement. In the encoder, DRIGNet extracts coherent long-range information and enhances the global structure of the image; in the decoder, DRIGNet captures short-range information and fuses dual-range information to restore detailed areas. Finally, we conduct extensive quantitative and qualitative experiments to demonstrate that the proposed DRIGNet outperforms the current State-of-the-Art (SOTA) methods across ten datasets.
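Since the paper's internal module designs are not reproduced on this page, the following is a minimal PyTorch sketch of the dual-range idea as the abstract describes it: channel interaction for the long-range branch (GDEM), a three-directional filter for the short-range branch (DFM), and attention-guided fusion (AGFM). The module names follow the paper, but every layer choice below is an assumption made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the dual-range modules named in the abstract.
# All layer choices are assumptions; the paper's exact designs may differ.
import torch
import torch.nn as nn


class GDEM(nn.Module):
    """Global Dynamic Enhancement Module (long-range branch, assumed form).

    Models channel interaction with a squeeze-and-excitation-style gate:
    global pooling summarizes each channel, a small 1x1-conv MLP mixes
    channels, and the resulting weights rescale the feature map.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mlp(self.pool(x))


class DFM(nn.Module):
    """Detail Focus Module (short-range branch, assumed form).

    Approximates a 'three-directional filter' with horizontal (1x3),
    vertical (3x1), and full (3x3) depthwise convolutions whose
    outputs are summed, merged, and added back residually.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.h = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1), groups=channels)
        self.v = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0), groups=channels)
        self.d = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.merge = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.merge(self.h(x) + self.v(x) + self.d(x)) + x


class AGFM(nn.Module):
    """Attention-Guided Fusion Module (assumed form).

    Fuses long- and short-range features with an attention map
    predicted from their concatenation.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, long_feat: torch.Tensor, short_feat: torch.Tensor) -> torch.Tensor:
        a = self.attn(torch.cat([long_feat, short_feat], dim=1))
        return a * long_feat + (1 - a) * short_feat


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    fused = AGFM(32)(GDEM(32)(x), DFM(32)(x))
    print(fused.shape)  # torch.Size([1, 32, 64, 64])
```

The gating in the hypothetical AGFM makes the two branches explicitly complementary: wherever the attention map favors the long-range (global-structure) features, the short-range (detail) features contribute less, and vice versa, which matches the abstract's description of complementary enhancement.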
Journal: Displays (Engineering & Technology — Electrical & Electronic Engineering)
CiteScore: 4.60
Self-citation rate: 25.60%
Annual articles: 138
Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.