{"title":"DRIGNet: Low-light image enhancement based on dual-range information guidance","authors":"Feng Huang, Jiong Huang, Jing Wu, Jianhua Lin, Jing Guo, Yunxiang Li, Zhewei Liu","doi":"10.1016/j.displa.2025.103163","DOIUrl":null,"url":null,"abstract":"<div><div>The task of low-light image enhancement aims to reconstruct details and visual information from degraded low-light images. However, existing deep learning methods for feature processing usually lack feature differentiation or fail to implement reasonable differentiation handling, which can limit the quality of the enhanced images, leading to issues like color distortion and blurred details. To address these limitations, we propose Dual-Range Information Guidance Network (DRIGNet). Specifically, we develop an efficient U-shaped architecture Dual-Range Information Guided Framework (DGF). DGF decouples traditional image features into dual-range information while integrating stage-specific feature properties with the proposed dual-range information. We design the Global Dynamic Enhancement Module (GDEM) using channel interaction and the Detail Focus Module (DFM) with three-directional filter, both embedded in DGF to model long-range and short-range features respectively. Additionally, we introduce a feature fusion strategy, Attention-Guided Fusion Module (AGFM), which merges dual-range information, facilitating complementary enhancement. In the encoder, DRIGNet extracts coherent long-range information and enhances the global structure of the image; in the decoder, DRIGNet captures short-range information and fuse dual-rage information to restore detailed areas. Finally, we conduct extensive quantitative and qualitative experiments to demonstrate that the proposed DRIGNet outperforms the current State-of-the-Art (SOTA) methods across ten datasets.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103163"},"PeriodicalIF":3.4000,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225002008","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
The task of low-light image enhancement aims to reconstruct details and visual information from degraded low-light images. However, existing deep learning methods for feature processing usually lack feature differentiation or fail to handle differentiated features reasonably, which can limit the quality of the enhanced images and lead to issues such as color distortion and blurred details. To address these limitations, we propose the Dual-Range Information Guidance Network (DRIGNet). Specifically, we develop an efficient U-shaped architecture, the Dual-Range Information Guided Framework (DGF). DGF decouples traditional image features into dual-range information while integrating stage-specific feature properties with the proposed dual-range information. We design the Global Dynamic Enhancement Module (GDEM), which uses channel interaction, and the Detail Focus Module (DFM), which uses a three-directional filter; both are embedded in DGF to model long-range and short-range features, respectively. Additionally, we introduce a feature fusion strategy, the Attention-Guided Fusion Module (AGFM), which merges dual-range information and facilitates complementary enhancement. In the encoder, DRIGNet extracts coherent long-range information and enhances the global structure of the image; in the decoder, DRIGNet captures short-range information and fuses the dual-range information to restore detailed areas. Finally, extensive quantitative and qualitative experiments demonstrate that the proposed DRIGNet outperforms current state-of-the-art (SOTA) methods across ten datasets.
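To make the described architecture more concrete, the following is a minimal, hypothetical PyTorch sketch of how a dual-range guided encoder-decoder with modules named GDEM, DFM, and AGFM could be organized. The module internals and all hyperparameters here are assumptions inferred only from the abstract (channel interaction via a 1-D convolution, directional depthwise filters, attention-weighted fusion); this is not the authors' implementation.

```python
# Hypothetical sketch of a dual-range guided enhancement network.
# Module internals are illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn


class GDEM(nn.Module):
    """Global Dynamic Enhancement Module (assumed): long-range context via channel interaction."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.mix = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)  # channel interaction
        self.act = nn.Sigmoid()

    def forward(self, x):
        w = self.pool(x)                              # (B, C, 1, 1)
        w = self.mix(w.squeeze(-1).transpose(1, 2))   # interact across channels
        w = self.act(w.transpose(1, 2).unsqueeze(-1))
        return x * w                                  # globally re-weight channels


class DFM(nn.Module):
    """Detail Focus Module (assumed): short-range detail via three directional depthwise filters."""
    def __init__(self, channels):
        super().__init__()
        self.h = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1), groups=channels)
        self.v = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0), groups=channels)
        self.d = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.fuse(self.h(x) + self.v(x) + self.d(x)) + x


class AGFM(nn.Module):
    """Attention-Guided Fusion Module (assumed): merges long- and short-range features."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, long_feat, short_feat):
        a = self.attn(torch.cat([long_feat, short_feat], dim=1))
        return a * long_feat + (1 - a) * short_feat   # complementary blending


class DRIGNetSketch(nn.Module):
    """Toy single-scale stand-in for the U-shaped DGF: GDEM on the encoder side,
    DFM on the decoder side, AGFM fusing the dual-range information."""
    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.encoder = GDEM(channels)   # long-range / global structure
        self.decoder = DFM(channels)    # short-range / detail recovery
        self.fusion = AGFM(channels)
        self.head = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        f = self.stem(x)
        long_feat = self.encoder(f)
        short_feat = self.decoder(long_feat)
        return self.head(self.fusion(long_feat, short_feat))


if __name__ == "__main__":
    out = DRIGNetSketch()(torch.rand(1, 3, 128, 128))
    print(out.shape)  # torch.Size([1, 3, 128, 128]) -- enhanced image, same resolution
```

The key design idea the sketch tries to mirror is that the two feature ranges are processed by separate, specialized modules and only merged afterwards through attention, rather than being mixed in a single undifferentiated feature stream.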
Journal introduction:
Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.