Continuous detail enhancement framework for low-light image enhancement
Kang Liu, Zhihao Xv, Zhe Yang, Lian Liu, Xinyu Li, Xiaopeng Hu
Displays, Volume 88, Article 103040. Published 2025-03-27. DOI: 10.1016/j.displa.2025.103040
Citations: 0
Abstract
Low-light image enhancement is a crucial task for improving image quality in scenarios such as nighttime surveillance, autonomous driving at twilight, and low-light photography. Existing enhancement methods often focus on directly increasing brightness and contrast but neglect the importance of structural information, leading to information loss. In this paper, we propose a Continuous Detail Enhancement Framework for low-light image enhancement, termed C-DEF. More specifically, we design an enhanced U-Net network that leverages dense connections to promote feature propagation, maintaining consistency within the feature space and better preserving image details. Then, a multi-perspective fusion enhancement module (MPFEM) is proposed to capture image features from multiple perspectives and further address the problem of feature space discontinuity. Moreover, an elaborate loss function drives the network to preserve critical information, achieving further performance improvement. Extensive experiments on various benchmarks demonstrate the superiority of our method over state-of-the-art alternatives in both qualitative and quantitative evaluations. In addition, promising outcomes have been obtained by directly applying the trained model to the coal-rock dataset, indicating the model's excellent generalization capability. The code is publicly available at https://github.com/xv994/C-DEF.
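To make the "dense connections" idea concrete, below is a minimal PyTorch sketch of a densely connected convolutional block of the kind the abstract describes for the enhanced U-Net encoder: each layer receives the concatenation of all earlier feature maps, which helps low-level detail and gradients propagate. The layer count, growth rate, and channel widths here are illustrative assumptions, not the paper's configuration; the authors' reference implementation is at the linked repository.

```python
# Hypothetical dense-connection block (DenseNet-style); parameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv layer sees the concatenation of all previous feature maps,
    promoting feature propagation and detail preservation in the encoder."""
    def __init__(self, in_channels: int, growth_rate: int = 16, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate  # concatenation grows the channel count
        # 1x1 conv projects the accumulated features back to the input width,
        # so the block can slot into a U-Net encoder stage unchanged.
        self.fuse = nn.Conv2d(channels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return self.fuse(torch.cat(features, dim=1))

if __name__ == "__main__":
    block = DenseBlock(in_channels=32)
    out = block(torch.randn(1, 32, 64, 64))
    print(out.shape)  # torch.Size([1, 32, 64, 64])
```

Concatenation (rather than residual addition) is what keeps earlier feature maps available verbatim to later layers; that is the property the abstract credits with maintaining consistency in the feature space.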
About the journal
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.