Spatial derivative-guided SNR regional differentiation enhancement fusion strategy for low-light image enhancement

Jiahui Zhu, Aoqun Jian, Le Yang, RunFang Hao, Luxiao Sang, Yang Ge, Rihui Kang, Shengbo Sang

Displays, Volume 91, Article 103219. DOI: 10.1016/j.displa.2025.103219. Published 2025-09-15.
Low-light image enhancement aims to improve brightness and contrast while preserving image content. Research into this problem has made significant progress with the development of deep learning technology. However, the Signal-to-Noise Ratio (SNR) of different regions varies greatly when processing images with drastic changes in brightness, and existing methods often produce artifacts and noise that degrade image quality. To address these problems, the proposed method incorporates local and global prior knowledge into the image, employing an efficient local-to-local and local-to-global feature fusion mechanism. This facilitates the generation of enhanced images with improved naturalness and a broader color dynamic range. In this approach, a spatial derivative-guided SNR regional differentiation enhancement fusion strategy is introduced. Low-SNR regions are enhanced in the frequency domain using the Fast Fourier Transform (FFT), while high/normal-SNR regions are handled by a convolutional encoder. The convolution residual block structure, which captures local information, generates short-range branches; the FFT module in the frequency domain generates long-range branches. The fusion of the two is guided by the SNR information of the original image. This approach also incorporates spatial derivatives as local priors in a low-light image enhancement network with an encoder-decoder structure. The encoder exploits the symmetrical properties of the image's spatial derivatives and incorporates correlating modules to suppress noise. Experiments conducted on disparate datasets illustrate that our approach outperforms existing state-of-the-art (SOTA) methods in terms of visual quality. Furthermore, the single-frame inference time can be reduced to 0.079 s.
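The core idea of the fusion strategy, blending a convolutional short-range branch and an FFT-based long-range branch according to a per-pixel SNR map of the original image, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SNR estimator (local mean over residual noise, a common proxy) and the linear blending weights are assumptions for clarity.

```python
import numpy as np

def estimate_snr_map(img, ksize=5):
    """Estimate a normalized per-pixel SNR map.

    Signal proxy: local box-filtered mean; noise proxy: absolute
    deviation of the pixel from that smoothed image. This is a common
    heuristic, not necessarily the estimator used in the paper.
    """
    pad = ksize // 2
    padded = np.pad(img, pad, mode="reflect")
    kernel = np.ones(ksize) / ksize
    # Separable box filter: convolve rows, then columns.
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    smooth = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, smooth)
    noise = np.abs(img - smooth) + 1e-6   # avoid division by zero
    snr = smooth / noise
    return snr / snr.max()                # normalize to [0, 1]

def fuse_branches(short_range, long_range, snr_map):
    """Pixel-wise convex blend guided by the SNR map: high-SNR pixels
    lean on the convolutional (short-range) branch, low-SNR pixels on
    the frequency-domain (long-range) branch."""
    return snr_map * short_range + (1.0 - snr_map) * long_range
```

In the actual network the two inputs would be learned feature maps rather than raw images, but the guidance principle (SNR-weighted fusion of local and global branches) is the same.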
Journal introduction:
Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.