Spatial derivative-guided SNR regional differentiation enhancement fusion strategy for low-light image enhancement

IF 3.4 · CAS Region 2 (Engineering & Technology) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Jiahui Zhu, Aoqun Jian, Le Yang, RunFang Hao, Luxiao Sang, Yang Ge, Rihui Kang, Shengbo Sang
{"title":"空间导数制导的低光图像信噪比区域分异增强融合策略","authors":"Jiahui Zhu ,&nbsp;Aoqun Jian ,&nbsp;Le Yang ,&nbsp;RunFang Hao ,&nbsp;Luxiao Sang ,&nbsp;Yang Ge ,&nbsp;Rihui Kang ,&nbsp;Shengbo Sang","doi":"10.1016/j.displa.2025.103219","DOIUrl":null,"url":null,"abstract":"<div><div>Low-light image enhancement aims to improve brightness and contrast while preserving image content. Research into this problem has made significant progress with the development of deep learning technology. However, the Signal-to-Noise Ratio(SNR) of different regions varies greatly when processing images with drastic changes in brightness. Existing methods often produce artifacts and noise that degrade image quality. To address these problems,the proposed method incorporates local and global prior knowledge into the image, employing an efficient local-to-local and local-to-global feature fusion mechanism. This facilitates the generation of enhanced images that exhibit enhanced naturalness and a broader color dynamic range. In this approach, a spatial derivative-guided SNR regional differentiation enhancement fusion strategy is introduced. The enhancement of low SNR regions is processed in the frequency domain using the Fast Fourier Transform (FFT) while the enhancement of high/normal SNR regions is handled by a convolutional encoder. The convolution residual block structure, which captures local information, generates short-range branches. The FFT module in the frequency domain generates long-range branches. The fusion of the two is guided by the SNR information of the original image. This approach also incorporates spatial derivatives as local priors in a low-light image enhancement network with an encoder–decoder structure. The encoder employs the symmetrical properties of the image’s spatial derivatives and incorporates correlating modules for the suppression of noise. Experiments conducted on disparate datasets illustrate that our approach outperforms existing state-of-the-art(SOTA) methods in terms of visual quality. Furthermore, the single-frame inference time can be reduced to 0.079 s.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103219"},"PeriodicalIF":3.4000,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spatial derivative-guided SNR regional differentiation enhancement fusion strategy for low-light image enhancement\",\"authors\":\"Jiahui Zhu ,&nbsp;Aoqun Jian ,&nbsp;Le Yang ,&nbsp;RunFang Hao ,&nbsp;Luxiao Sang ,&nbsp;Yang Ge ,&nbsp;Rihui Kang ,&nbsp;Shengbo Sang\",\"doi\":\"10.1016/j.displa.2025.103219\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Low-light image enhancement aims to improve brightness and contrast while preserving image content. Research into this problem has made significant progress with the development of deep learning technology. However, the Signal-to-Noise Ratio(SNR) of different regions varies greatly when processing images with drastic changes in brightness. Existing methods often produce artifacts and noise that degrade image quality. To address these problems,the proposed method incorporates local and global prior knowledge into the image, employing an efficient local-to-local and local-to-global feature fusion mechanism. This facilitates the generation of enhanced images that exhibit enhanced naturalness and a broader color dynamic range. 
In this approach, a spatial derivative-guided SNR regional differentiation enhancement fusion strategy is introduced. The enhancement of low SNR regions is processed in the frequency domain using the Fast Fourier Transform (FFT) while the enhancement of high/normal SNR regions is handled by a convolutional encoder. The convolution residual block structure, which captures local information, generates short-range branches. The FFT module in the frequency domain generates long-range branches. The fusion of the two is guided by the SNR information of the original image. This approach also incorporates spatial derivatives as local priors in a low-light image enhancement network with an encoder–decoder structure. The encoder employs the symmetrical properties of the image’s spatial derivatives and incorporates correlating modules for the suppression of noise. Experiments conducted on disparate datasets illustrate that our approach outperforms existing state-of-the-art(SOTA) methods in terms of visual quality. Furthermore, the single-frame inference time can be reduced to 0.079 s.</div></div>\",\"PeriodicalId\":50570,\"journal\":{\"name\":\"Displays\",\"volume\":\"91 \",\"pages\":\"Article 103219\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Displays\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0141938225002562\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225002562","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Low-light image enhancement aims to improve brightness and contrast while preserving image content. Research on this problem has made significant progress with the development of deep learning. However, when processing images with drastic brightness variations, the Signal-to-Noise Ratio (SNR) differs greatly across regions, and existing methods often produce artifacts and noise that degrade image quality. To address these problems, the proposed method incorporates local and global prior knowledge into the enhancement process through an efficient local-to-local and local-to-global feature fusion mechanism, producing enhanced images with greater naturalness and a wider color dynamic range. Specifically, a spatial derivative-guided SNR regional differentiation enhancement fusion strategy is introduced: low-SNR regions are enhanced in the frequency domain using the Fast Fourier Transform (FFT), while high/normal-SNR regions are handled by a convolutional encoder. A convolutional residual block structure captures local information and forms the short-range branch; an FFT module operating in the frequency domain forms the long-range branch; and the fusion of the two is guided by the SNR information of the original image. The approach also incorporates spatial derivatives as local priors in a low-light image enhancement network with an encoder–decoder structure, where the encoder exploits the symmetry of the image's spatial derivatives and includes correlation modules to suppress noise. Experiments on diverse datasets show that our approach outperforms existing state-of-the-art (SOTA) methods in visual quality, and the single-frame inference time can be reduced to 0.079 s.
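
The abstract describes the architecture only at a high level, so the snippet below is a minimal PyTorch-style sketch of the SNR-guided two-branch fusion idea rather than the authors' implementation. It assumes a simple local-mean heuristic for the per-pixel SNR map, and the module names (ResidualBlock, FFTBlock, SNRGuidedFusion), channel widths, and block counts are hypothetical placeholders. Low-SNR pixels are weighted toward the long-range frequency-domain branch, and high/normal-SNR pixels toward the short-range convolutional branch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def estimate_snr_map(x: torch.Tensor, kernel: int = 5) -> torch.Tensor:
    """Heuristic per-pixel SNR proxy (assumption, not the paper's estimator):
    local mean as signal, |x - local mean| as noise. x: (B, C, H, W) in [0, 1]."""
    gray = x.mean(dim=1, keepdim=True)
    denoised = F.avg_pool2d(gray, kernel, stride=1, padding=kernel // 2)
    noise = (gray - denoised).abs()
    snr = denoised / (noise + 1e-6)
    # Normalize to [0, 1] so it can act directly as a fusion weight.
    return snr / (snr.amax(dim=(2, 3), keepdim=True) + 1e-6)

class ResidualBlock(nn.Module):
    """Short-range branch: a plain convolutional residual block capturing local structure."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class FFTBlock(nn.Module):
    """Long-range branch: 1x1 convolutions on the real/imaginary parts of the
    spectrum, giving every output pixel a global receptive field."""
    def __init__(self, ch: int):
        super().__init__()
        self.freq = nn.Sequential(
            nn.Conv2d(2 * ch, 2 * ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * ch, 2 * ch, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")            # complex spectrum
        feat = torch.cat([spec.real, spec.imag], dim=1)
        feat = self.freq(feat)
        real, imag = feat.chunk(2, dim=1)
        out = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return x + out

class SNRGuidedFusion(nn.Module):
    """Blend the two branches with the SNR map: high-SNR pixels lean on the
    convolutional (short-range) branch, low-SNR pixels on the FFT (long-range) branch."""
    def __init__(self, in_ch: int = 3, ch: int = 32):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.short_branch = ResidualBlock(ch)
        self.long_branch = FFTBlock(ch)
        self.tail = nn.Conv2d(ch, in_ch, 3, padding=1)

    def forward(self, x):
        snr = estimate_snr_map(x)                           # (B, 1, H, W)
        feat = self.head(x)
        fused = snr * self.short_branch(feat) + (1.0 - snr) * self.long_branch(feat)
        return torch.clamp(x + self.tail(fused), 0.0, 1.0)

if __name__ == "__main__":
    model = SNRGuidedFusion()
    dummy = torch.rand(1, 3, 128, 128)                      # hypothetical low-light input
    print(model(dummy).shape)                               # torch.Size([1, 3, 128, 128])
```

The spatial-derivative local prior mentioned in the abstract (symmetric image gradients used by the encoder to suppress noise) is not modeled in this sketch; in a fuller version it would plausibly be computed from the input and fed to the encoder as additional channels.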
Source journal: Displays (Engineering: Electrical & Electronic)
CiteScore: 4.60 · Self-citation rate: 25.60% · Articles per year: 138 · Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display–human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display–human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.