Self-Supervised Normalizing Flow for Jointing Low-Light Enhancement and Deblurring

IF 1.8 · CAS Tier 3, Engineering · JCR Q3, Engineering, Electrical & Electronic
Lingyan Li, Chunzi Zhu, Jiale Chen, Baoshun Shi, Qiusheng Lian
Journal: Circuits, Systems and Signal Processing
DOI: 10.1007/s00034-024-02723-0
Publication date: 2024-05-31
Publication type: Journal Article
Citations: 0

Abstract


Low-light image enhancement algorithms have been widely developed. Nevertheless, long exposures under low-light conditions introduce motion blur into the captured images, which makes it challenging to address low-light enhancement and deblurring jointly. A recent effort called LEDNet tackles these issues with an encoder-decoder pipeline. However, LEDNet relies on paired data during training, and capturing low-light blurry and normal-light sharp images of the same visual scene simultaneously is difficult. To overcome these challenges, we propose a self-supervised normalizing flow, called SSFlow, for joint low-light enhancement and deblurring. SSFlow consists of two modules: an orthogonal channel attention U-Net (OAtt-UNet) module for extracting features, and a normalizing flow for color correction and denoising (CCD flow). During training, the two modules are connected to each other by a color map. Concretely, the OAtt-UNet module is a variant of U-Net consisting of an encoder and a decoder. It takes a low-light blurry image as input and incorporates an orthogonal channel attention block into the encoder to improve the representation ability of the overall network. A filter adaptive convolutional layer is integrated into the decoder, applying a dynamic convolution filter to each element of the feature map for effective deblurring. To extract color information and denoise, the CCD flow makes full use of the powerful learning ability of normalizing flows. We construct an unsupervised loss function that continuously optimizes the network by enforcing a consistent color map between the two modules in color space. The effectiveness of the proposed network is demonstrated through both qualitative and quantitative experiments. Code is available at https://github.com/shibaoshun/SSFlow.
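Two ideas from the abstract can be sketched in a few lines: the invertible transform that a normalizing flow such as the CCD flow is built from, and a color-map consistency loss linking two module outputs. The sketch below is not the authors' implementation (see their repository for that); `color_map`, `AffineCoupling`, and `color_consistency_loss` are hypothetical names, the coupling layer is a generic RealNVP-style block, and the color map is assumed to be per-pixel chromaticity (each channel divided by the channel sum), a common choice in this line of work.

```python
import numpy as np

def color_map(img, eps=1e-6):
    # Per-pixel chromaticity: each channel divided by the sum over channels.
    # Assumed form of the "color map" that connects the two SSFlow modules;
    # it is (nearly) invariant to global brightness scaling.
    return img / (img.sum(axis=-1, keepdims=True) + eps)

class AffineCoupling:
    """Minimal RealNVP-style affine coupling layer, illustrating the exact
    invertibility that normalizing flows rely on. One half of the input
    conditions an affine transform of the other half."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        half = dim // 2
        # Toy linear "networks" producing log-scale and translation.
        self.w_s = rng.normal(scale=0.1, size=(half, dim - half))
        self.w_t = rng.normal(scale=0.1, size=(half, dim - half))

    def forward(self, x):
        half = x.shape[-1] // 2
        x1, x2 = x[..., :half], x[..., half:]
        s = np.tanh(x1 @ self.w_s)          # bounded log-scale
        t = x1 @ self.w_t                   # translation
        y2 = x2 * np.exp(s) + t
        return np.concatenate([x1, y2], axis=-1)

    def inverse(self, y):
        half = y.shape[-1] // 2
        y1, y2 = y[..., :half], y[..., half:]
        s = np.tanh(y1 @ self.w_s)
        t = y1 @ self.w_t
        x2 = (y2 - t) * np.exp(-s)          # exact inverse of forward
        return np.concatenate([y1, x2], axis=-1)

def color_consistency_loss(pred_a, pred_b):
    # L1 distance between the color maps of the two module outputs:
    # a stand-in for the unsupervised loss described in the abstract.
    return float(np.abs(color_map(pred_a) - color_map(pred_b)).mean())
```

Because the coupling layer only scales and shifts one half of the variables given the other half, its inverse is available in closed form, which is what lets a flow be trained by exact likelihood while remaining usable in both directions.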

Source journal
Circuits, Systems and Signal Processing (Engineering, Electrical & Electronic)
CiteScore: 4.80
Self-citation rate: 13.00%
Articles per year: 321
Review time: 4.6 months
Aims and scope: Rapid developments in the analog and digital processing of signals for communication, control, and computer systems have made the theory of electrical circuits and signal processing a burgeoning area of research and design. The aim of Circuits, Systems, and Signal Processing (CSSP) is to help meet the need for outlets for significant research papers and state-of-the-art review articles in the area. The scope of the journal is broad, ranging from mathematical foundations to practical engineering design. It encompasses, but is not limited to, such topics as linear and nonlinear networks, distributed circuits and systems, multi-dimensional signals and systems, analog filters and signal processing, digital filters and signal processing, statistical signal processing, multimedia, computer aided design, graph theory, neural systems, communication circuits and systems, and VLSI signal processing. The Editorial Board is international, and papers are welcome from throughout the world. The journal is devoted primarily to research papers, but survey, expository, and tutorial papers are also published. Circuits, Systems, and Signal Processing (CSSP) is published twelve times annually.