Low-light image enhancement network based on central difference convolution

Impact Factor: 8.0 · CAS Region 2 (Computer Science) · JCR Q1, Automation & Control Systems
Yong Chen, Shangming Chen, Huanlin Liu, Hangying Xiong, Yourui Zhang
{"title":"Low-light image enhancement network based on central difference convolution","authors":"Yong Chen ,&nbsp;Shangming Chen ,&nbsp;Huanlin Liu ,&nbsp;Hangying Xiong ,&nbsp;Yourui Zhang","doi":"10.1016/j.engappai.2025.111492","DOIUrl":null,"url":null,"abstract":"<div><div>Since the convolutional neural networks and transformers used in existing low-light image enhancement methods were prone to ignore high-frequency information, resulting in blurred details of the enhanced image, this affected the performance of computer vision tasks at night. Therefore, we propose a novel low-light image enhancement network based on central difference convolution (CDCLNet). This method uses traditional image processing methods to help the network extract high-frequency information. Specifically, firstly, in order to fully expose the hidden high-frequency details, the proposed method uses the multi-exposure strategy based on bright and dark masks to expose the image to different levels. Secondly, the complementary information between multi-exposure images is fused through the first-stage network. Finally, the second-stage network suppresses the amplified noise and enhances the details. In addition, We design a central difference convolution module (CDCM) with channel attention to adaptively extract gradient-level detailed features according to the need of the two-stage network. In order to make the network notice illumination non-uniformity, we propose a multi-scale feature attention module (MFAM), which extracts multi-scale features in each channel and generates channel-specific attention maps. Experiments on four public datasets show that the proposed method can enhance the details more effectively than mainstream methods, and achieves the highest structural similarity index on two paired datasets, with an average value of 0.899.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"158 ","pages":"Article 111492"},"PeriodicalIF":8.0000,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625014940","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Convolutional neural networks and transformers used in existing low-light image enhancement methods tend to ignore high-frequency information, which blurs the details of the enhanced image and degrades the performance of computer vision tasks at night. We therefore propose a novel low-light image enhancement network based on central difference convolution (CDCLNet). The method uses traditional image processing techniques to help the network extract high-frequency information. Specifically, to fully expose hidden high-frequency details, it first applies a multi-exposure strategy based on bright and dark masks to expose the image at different levels. Second, the complementary information among the multi-exposure images is fused by the first-stage network. Finally, the second-stage network suppresses the amplified noise and enhances details. In addition, we design a central difference convolution module (CDCM) with channel attention to adaptively extract gradient-level detail features according to the needs of the two-stage network. To make the network aware of illumination non-uniformity, we propose a multi-scale feature attention module (MFAM), which extracts multi-scale features in each channel and generates channel-specific attention maps. Experiments on four public datasets show that the proposed method enhances details more effectively than mainstream methods and achieves the highest structural similarity index on the two paired datasets, with an average value of 0.899.
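To make the operation named in the title concrete, below is a minimal sketch of a central difference convolution layer in PyTorch. The class name `CentralDiffConv2d`, the mixing factor `theta`, and the layer hyperparameters are illustrative assumptions, not the paper's implementation; the paper's CDCM additionally wraps this operation with channel attention, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentralDiffConv2d(nn.Module):
    """Central difference convolution: blends a vanilla convolution with a
    gradient-level (central difference) term that emphasises high-frequency detail."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.theta = theta  # 0 -> plain convolution, 1 -> pure central difference

    def forward(self, x):
        out_vanilla = self.conv(x)
        if self.theta == 0:
            return out_vanilla
        # The difference term sum_n w(p_n) * (x(p_0 + p_n) - x(p_0)) equals the
        # vanilla response minus x(p_0) times the sum of the kernel weights,
        # so it reduces to a 1x1 convolution with the spatially summed kernel.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)  # (out_ch, in_ch, 1, 1)
        out_diff = F.conv2d(x, kernel_sum)
        return out_vanilla - self.theta * out_diff


if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)      # dummy low-light image batch
    layer = CentralDiffConv2d(3, 16)
    print(layer(x).shape)             # torch.Size([1, 16, 64, 64])
```

The weighted combination of the vanilla and difference terms collapses into a single expression, `vanilla - theta * (x * kernel_sum)`, so the gradient-level information is obtained with only one extra 1x1 convolution rather than explicit neighbourhood differencing.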
Source journal: Engineering Applications of Artificial Intelligence (Engineering & Technology – Engineering: Electrical & Electronic)
CiteScore: 9.60
Self-citation rate: 10.00%
Articles per year: 505
Review time: 68 days
Aims and scope: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.