Lingyan Li, Chunzi Zhu, Jiale Chen, Baoshun Shi, Qiusheng Lian
Title: Self-Supervised Normalizing Flow for Jointing Low-Light Enhancement and Deblurring
Journal: Circuits, Systems and Signal Processing (JCR Q3, Engineering, Electrical & Electronic)
DOI: 10.1007/s00034-024-02723-0
Published: 2024-05-31
Citations: 0
Abstract
Low-light image enhancement algorithms have been widely developed. Nevertheless, long exposure under low-light conditions introduces motion blur in the captured images, making it challenging to address low-light enhancement and deblurring jointly. A recent effort, LEDNet, tackles these issues with an encoder-decoder pipeline. However, LEDNet relies on paired data during training, and capturing low-light blurry and normal-sharp images of the same visual scene simultaneously is difficult. To overcome these challenges, we propose a self-supervised normalizing flow, SSFlow, for joint low-light enhancement and deblurring. SSFlow consists of two modules: an orthogonal channel attention U-Net (OAtt-UNet) module for extracting features, and a normalizing flow for color correction and denoising (CCD flow). During training, the two modules are connected by a color map. Concretely, the OAtt-UNet module is a U-Net variant consisting of an encoder and a decoder. It takes a low-light blurry image as input and incorporates an orthogonal channel attention block into the encoder to improve the representation ability of the overall network. A filter adaptive convolutional layer is integrated into the decoder, applying a dynamic convolution filter to each element of the feature map for effective deblurring. To extract color information and denoise, the CCD flow exploits the powerful learning ability of normalizing flows. We construct an unsupervised loss function that continuously optimizes the network by enforcing consistency between the color maps of the two modules in color space. The effectiveness of the proposed network is demonstrated through both qualitative and quantitative experiments. Code is available at https://github.com/shibaoshun/SSFlow.
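The abstract does not specify the internal layers of the CCD flow, so the following numpy sketch is only a generic illustration of the property normalizing flows bring to this role: an affine coupling layer (the standard RealNVP-style building block, not SSFlow's actual architecture) is exactly invertible and has a tractable log-determinant, which is what makes flows attractive for learning color and noise distributions. The `scale_net`/`shift_net` linear maps are hypothetical stand-ins for learned subnetworks.

```python
import numpy as np

def coupling_forward(x, scale_net, shift_net):
    """Affine coupling: keep the first half of the channels,
    affinely transform the second half conditioned on the first."""
    x1, x2 = np.split(x, 2, axis=-1)
    s = scale_net(x1)                 # log-scale, conditioned on x1
    t = shift_net(x1)                 # shift, conditioned on x1
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=-1)          # log |det Jacobian| per sample
    return np.concatenate([x1, y2], axis=-1), log_det

def coupling_inverse(y, scale_net, shift_net):
    """Exact inverse: recompute s, t from the untouched half and undo."""
    y1, y2 = np.split(y, 2, axis=-1)
    s = scale_net(y1)
    t = shift_net(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

# Toy "networks": fixed linear maps standing in for learned CNN branches.
rng = np.random.default_rng(0)
W_s = rng.normal(size=(4, 4)) * 0.1
W_t = rng.normal(size=(4, 4)) * 0.1
scale_net = lambda h: h @ W_s
shift_net = lambda h: h @ W_t

x = rng.normal(size=(2, 8))           # batch of 2 feature vectors
y, log_det = coupling_forward(x, scale_net, shift_net)
x_rec = coupling_inverse(y, scale_net, shift_net)
assert np.allclose(x, x_rec)          # exact invertibility
```

Because the inverse is exact and the Jacobian log-determinant is a cheap sum, a stack of such layers can be trained by maximum likelihood, which is the general mechanism a flow-based module like the CCD flow builds on.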
Journal Description
Rapid developments in the analog and digital processing of signals for communication, control, and computer systems have made the theory of electrical circuits and signal processing a burgeoning area of research and design. The aim of Circuits, Systems, and Signal Processing (CSSP) is to help meet the need for outlets for significant research papers and state-of-the-art review articles in the area.
The scope of the journal is broad, ranging from mathematical foundations to practical engineering design. It encompasses, but is not limited to, such topics as linear and nonlinear networks, distributed circuits and systems, multi-dimensional signals and systems, analog filters and signal processing, digital filters and signal processing, statistical signal processing, multimedia, computer-aided design, graph theory, neural systems, communication circuits and systems, and VLSI signal processing.
The Editorial Board is international, and papers are welcome from throughout the world. The journal is devoted primarily to research papers, but survey, expository, and tutorial papers are also published.
Circuits, Systems, and Signal Processing (CSSP) is published twelve times annually.