{"title":"Discriminative Feature Extraction and Enhancement Network for Low-Light Image","authors":"Jiazhen Zu, Yongxia Zhou, Le Chen, Chao Dai","doi":"10.1109/ICAICE54393.2021.00158","DOIUrl":null,"url":null,"abstract":"Photos taken in low light conditions will cause a series of visual degradation phenomena due to underexposure, such as low brightness, loss of information, noise and color distortion. In order to solve the above problems, a discriminative feature extraction and enhancement network is proposed for low-light image enhancement. First, the shallow features are extracted by Inception V2,and the deep features are further extracted by the residual module. Then, the shallow and deep features are fused, and the fusion results are input into the discriminative feature enhancement module for enhancing. Specifically, the residual channel attention module is introduced after each stage to capture important feature information, which helps to restore the color of low-light images and reduce artifacts. Finally, the brightness adjustment module is used to adjust the brightness of the image. In addition, a hybrid loss function is designed to measure the loss of model training from multiple levels. The experimental results on the LOL-v2 dataset show that the proposed algorithm can reduce noise while improving image brightness, reduce color distortion and artifacts, and is superior to other related algorithms in objective indicators. The result maps are more real and natural in subjective vision.","PeriodicalId":388444,"journal":{"name":"2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAICE54393.2021.00158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Photos taken in low-light conditions suffer from a series of visual degradations caused by underexposure, such as low brightness, loss of information, noise, and color distortion. To address these problems, a discriminative feature extraction and enhancement network is proposed for low-light image enhancement. First, shallow features are extracted by an Inception V2 module, and deep features are further extracted by a residual module. The shallow and deep features are then fused, and the fused result is fed into the discriminative feature enhancement module for enhancement. Specifically, a residual channel attention module is introduced after each stage to capture important feature information, which helps restore the color of low-light images and reduce artifacts. Finally, a brightness adjustment module adjusts the brightness of the image. In addition, a hybrid loss function is designed to measure the training loss at multiple levels. Experimental results on the LOL-v2 dataset show that the proposed algorithm reduces noise while improving image brightness, suppresses color distortion and artifacts, and outperforms related algorithms on objective metrics. Subjectively, the resulting images appear more realistic and natural.
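The paper itself does not include code; the following is a minimal PyTorch-style sketch of the residual channel attention idea described in the abstract (channel attention applied inside a residual connection so that informative channels are re-weighted). The module name, channel count, and reduction ratio are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a residual channel attention block:
# a small residual body followed by squeeze-and-excitation style channel
# attention, with the re-weighted features added back to the input.
import torch
import torch.nn as nn


class ResidualChannelAttention(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        # Two 3x3 convolutions produce the residual features.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: global average pooling + bottleneck MLP + sigmoid.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.body(x)
        weights = self.attention(features)   # per-channel weights in (0, 1)
        return x + features * weights        # residual connection


if __name__ == "__main__":
    block = ResidualChannelAttention(channels=64)
    dummy = torch.randn(1, 64, 128, 128)     # stand-in for a fused feature map
    print(block(dummy).shape)                # torch.Size([1, 64, 128, 128])
```

In the pipeline described above, a block like this would be placed after each stage of the discriminative feature enhancement module, operating on the fused shallow and deep features before the final brightness adjustment step.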