GLNet: low-light image enhancement via grayscale priors
Li Guo, Junwei Xie, Yuyang Xue, Ru Li, Weixin Zheng, T. Tong, Qinquan Gao
International Conference on Signal Processing Systems, 2022-05-01. DOI: 10.1117/12.2631466
Citations: 0
Abstract
Low-light images generally result from shooting in a dim environment or at a difficult angle. They not only impair human perception but also degrade the performance of artificial intelligence algorithms such as object detection and super-resolution. Low-light enhancement faces two main difficulties: first, applying image processing algorithms independently to each low-light image often causes color distortion; second, the texture of extremely dark regions must be restored. To address these issues, we present two novel and general approaches. First, we propose a new loss function that constrains the ratio between corresponding RGB pixel values in the low-light image and the high-light image. Second, we propose a new framework named GLNet, which uses dense residual connection blocks to extract deep features from low-light images, and we design a grayscale channel network branch that guides texture restoration on the RGB channels by enhancing the grayscale image. Ablation experiments demonstrate the effectiveness of the proposed modules, and extensive quantitative and perceptual experiments show that our approach achieves state-of-the-art performance on the public dataset.
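The abstract mentions a loss that constrains the ratio between corresponding RGB pixel values of the enhanced output and the high-light reference, but does not give its formula. The following is a minimal illustrative sketch, assuming a simple L1 penalty on the deviation of that per-pixel ratio from 1; the function name, the epsilon term, and the exact penalty form are assumptions, not the paper's definition.

```python
import numpy as np

def ratio_consistency_loss(pred, target, eps=1e-6):
    """Hypothetical sketch of a ratio-constraining loss.

    pred:   enhanced image, array of shape (H, W, 3), values in [0, 1]
    target: high-light reference image, same shape and range
    eps:    small constant assumed here to avoid division by zero

    Penalizes the per-pixel, per-channel ratio pred/target for
    deviating from 1, so the color balance of the enhanced image
    tracks the reference rather than drifting channel by channel.
    """
    ratio = (pred + eps) / (target + eps)
    return float(np.mean(np.abs(ratio - 1.0)))
```

Under this sketch, identical images yield a loss of zero, and a uniformly under-exposed prediction yields a loss proportional to how far its ratio to the reference falls below one; in practice such a term would be combined with a standard reconstruction loss during training.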