EDLLIE-Net: Enhanced Deep Convolutional Networks for Low-Light Image Enhancement
Xue Ke, Wei Lin, Gaojie Chen, Quan Chen, Xianzhi Qi, Jie Ma
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC), pp. 59-68, July 2020
DOI: 10.1109/ICIVC50857.2020.9177454
Citations: 8
Abstract
Low-light image enhancement technology has developed rapidly in recent years. However, most existing methods require tuning many parameters or perform unstably when the environment varies greatly. In this paper, we propose a novel low-light image enhancement method, the enhanced deep convolutional low-light image enhancement network (EDLLIE-Net), to address these problems. First, the proposed method extracts multi-scale feature maps, which improves the use of contextual information. Then, it rescales the feature maps with an attention mechanism to emphasize the most useful information and characteristics. Finally, it uses an encoder-decoder and residual-learning architecture to recover a normal-light image from the low-light input. To demonstrate the effectiveness of the proposed model, we evaluate it from two aspects. On one hand, we show that EDLLIE-Net not only handles different dark scenes effectively but also outperforms other representative methods on common metrics. On the other hand, we propose a novel evaluation method that combines the enhancement result with a high-level vision task, and show that our method yields a larger degree of improvement for high-level vision tasks.
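The abstract names three architectural ingredients: multi-scale feature extraction, attention-based rescaling of feature maps, and an encoder-decoder with residual learning. The PyTorch sketch below illustrates how these ideas typically fit together; it is not the authors' EDLLIE-Net, and all module names, layer widths, dilation rates, and the squeeze-and-excitation style of attention are illustrative assumptions, since the exact architecture is not given in the abstract.

```python
# Minimal sketch of the three ideas named in the abstract: multi-scale feature
# extraction, attention-based rescaling, and an encoder-decoder with residual
# learning. All module names and hyperparameters are illustrative assumptions,
# not the paper's actual EDLLIE-Net configuration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style per-channel rescaling of a feature map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # rescale channels by learned attention weights


class MultiScaleBlock(nn.Module):
    """Parallel dilated convolutions (different receptive fields) fused by 1x1 conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)
        self.attn = ChannelAttention(out_ch)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.attn(self.fuse(feats))


class EncoderDecoderSketch(nn.Module):
    """Downsample, process, upsample; the network learns a residual added to the input."""
    def __init__(self, ch=32):
        super().__init__()
        self.head = MultiScaleBlock(3, ch)
        self.down = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)
        self.body = MultiScaleBlock(ch * 2, ch * 2)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.head(x)
        f = self.up(self.body(self.down(f)))
        return torch.clamp(x + self.tail(f), 0.0, 1.0)  # residual learning


if __name__ == "__main__":
    net = EncoderDecoderSketch()
    low_light = torch.rand(1, 3, 128, 128)  # dummy low-light image in [0, 1]
    enhanced = net(low_light)
    print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```

Predicting a residual that is added back to the input, rather than regressing the enhanced image directly, is a common design choice for restoration networks because the network only needs to model the illumination correction; whether EDLLIE-Net uses a global or block-wise residual is not stated in the abstract.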