Continuous detail enhancement framework for low-light image enhancement

IF 3.7 · CAS Quartile 2 (Engineering & Technology) · JCR Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
Kang Liu, Zhihao Xv, Zhe Yang, Lian Liu, Xinyu Li, Xiaopeng Hu
{"title":"微光图像增强的连续细节增强框架","authors":"Kang Liu,&nbsp;Zhihao Xv,&nbsp;Zhe Yang,&nbsp;Lian Liu,&nbsp;Xinyu Li,&nbsp;Xiaopeng Hu","doi":"10.1016/j.displa.2025.103040","DOIUrl":null,"url":null,"abstract":"<div><div>Low-light image enhancement is a crucial task for improving image quality in scenarios such as nighttime surveillance, autonomous driving at twilight, and low-light photography. Existing enhancement methods often focus on directly increasing brightness and contrast but neglect the importance of structural information, leading to information loss. In this paper, we propose a Continuous Detail Enhancement Framework for low-light image enhancement, termed as C-DEF. More specifically, we design an enhanced U-Net network that leverages dense connections to promote feature propagation to maintain consistency within the feature space and better preserve image details. Then, multi-perspective fusion enhancement module (MPFEM) is proposed to capture image features from multiple perspectives and further address the problem of feature space discontinuity. Moreover, an elaborate loss function drives the network to preserve critical information to achieve excess performance improvement. Extensive experiments on various benchmarks demonstrate the superiority of our method over state-of-the-art alternatives in both qualitative and quantitative evaluations. In addition, promising outcomes have been obtained by directly applying the trained model to the coal-rock dataset, indicating the model’s excellent generalization capability. The code is publicly available at <span><span>https://github.com/xv994/C-DEF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103040"},"PeriodicalIF":3.7000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Continuous detail enhancement framework for low-light image enhancement\",\"authors\":\"Kang Liu,&nbsp;Zhihao Xv,&nbsp;Zhe Yang,&nbsp;Lian Liu,&nbsp;Xinyu Li,&nbsp;Xiaopeng Hu\",\"doi\":\"10.1016/j.displa.2025.103040\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Low-light image enhancement is a crucial task for improving image quality in scenarios such as nighttime surveillance, autonomous driving at twilight, and low-light photography. Existing enhancement methods often focus on directly increasing brightness and contrast but neglect the importance of structural information, leading to information loss. In this paper, we propose a Continuous Detail Enhancement Framework for low-light image enhancement, termed as C-DEF. More specifically, we design an enhanced U-Net network that leverages dense connections to promote feature propagation to maintain consistency within the feature space and better preserve image details. Then, multi-perspective fusion enhancement module (MPFEM) is proposed to capture image features from multiple perspectives and further address the problem of feature space discontinuity. Moreover, an elaborate loss function drives the network to preserve critical information to achieve excess performance improvement. Extensive experiments on various benchmarks demonstrate the superiority of our method over state-of-the-art alternatives in both qualitative and quantitative evaluations. 
In addition, promising outcomes have been obtained by directly applying the trained model to the coal-rock dataset, indicating the model’s excellent generalization capability. The code is publicly available at <span><span>https://github.com/xv994/C-DEF</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50570,\"journal\":{\"name\":\"Displays\",\"volume\":\"88 \",\"pages\":\"Article 103040\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Displays\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0141938225000770\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225000770","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Low-light image enhancement is a crucial task for improving image quality in scenarios such as nighttime surveillance, autonomous driving at twilight, and low-light photography. Existing enhancement methods often focus on directly increasing brightness and contrast but neglect the importance of structural information, leading to information loss. In this paper, we propose a Continuous Detail Enhancement Framework for low-light image enhancement, termed C-DEF. More specifically, we design an enhanced U-Net network that leverages dense connections to promote feature propagation, maintaining consistency within the feature space and better preserving image details. Then, a multi-perspective fusion enhancement module (MPFEM) is proposed to capture image features from multiple perspectives and further address the problem of feature space discontinuity. Moreover, an elaborate loss function drives the network to preserve critical information, yielding additional performance improvement. Extensive experiments on various benchmarks demonstrate the superiority of our method over state-of-the-art alternatives in both qualitative and quantitative evaluations. In addition, promising outcomes have been obtained by directly applying the trained model to a coal-rock dataset, indicating the model's excellent generalization capability. The code is publicly available at https://github.com/xv994/C-DEF.
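The dense-connection idea can be pictured with a short sketch. Below is a minimal, hypothetical PyTorch block in the DenseNet style that a dense-connection U-Net encoder might use; the growth rate, layer count, and normalization choice are illustrative assumptions, not values taken from the paper (the authors' actual code is in the GitHub repository linked above).

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Every layer consumes the concatenation of all earlier feature maps,
    so low-level detail propagates unchanged to deeper layers."""

    def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            ch += growth  # the next layer sees everything produced so far
        self.out_ch = ch  # a U-Net stage would downsample from here

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```

Because every layer's output is carried forward by concatenation rather than overwritten, the input's fine structure is still present at the block's output, which matches the abstract's goal of keeping the feature space consistent.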
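The "multi-perspective" notion behind MPFEM can likewise be sketched. One plausible reading, assumed here rather than taken from the paper, is a set of parallel branches with different receptive fields whose outputs are fused back to the input width:

```python
class MultiPerspectiveFusion(nn.Module):
    """Illustrative stand-in for an MPFEM-style module: parallel dilated
    convolutions view the same features at several scales, and a 1x1
    convolution fuses the views. Branch count and dilation rates are
    assumptions for the sketch."""

    def __init__(self, ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(ch, ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(ch * len(dilations), ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        views = [branch(x) for branch in self.branches]
        return x + self.fuse(torch.cat(views, dim=1))  # residual keeps detail
```

The residual connection in the sketch ensures that fusing perspectives cannot erase the detail the dense blocks preserved; for example, `MultiPerspectiveFusion(32)(torch.randn(1, 32, 64, 64))` returns a tensor of the same shape.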
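The abstract calls the loss "elaborate" without listing its terms. A typical composition for detail-preserving enhancement pairs a pixel term with a structure term; the sketch below is such a generic combination with placeholder weights, not the paper's loss:

```python
import torch.nn.functional as F

def enhancement_loss(pred: torch.Tensor, target: torch.Tensor,
                     w_pix: float = 1.0, w_struct: float = 0.5) -> torch.Tensor:
    """Hypothetical composite loss: L1 on pixels plus L1 on image gradients,
    the latter acting as a cheap proxy for structural preservation."""
    pixel = F.l1_loss(pred, target)
    grad = (F.l1_loss(pred.diff(dim=-1), target.diff(dim=-1)) +
            F.l1_loss(pred.diff(dim=-2), target.diff(dim=-2)))
    return w_pix * pixel + w_struct * grad
```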
Source journal
Displays (Engineering: Electronic & Electrical)
CiteScore: 4.60
Self-citation rate: 25.60%
Articles per year: 138
Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.