CRNet: Unsupervised Color Retention Network for Blind Motion Deblurring

Suiyi Zhao, Zhao Zhang, Richang Hong, Mingliang Xu, Haijun Zhang, Meng Wang, Shuicheng Yan
{"title":"CRNet: Unsupervised Color Retention Network for Blind Motion Deblurring","authors":"Suiyi Zhao, Zhao Zhang, Richang Hong, Mingliang Xu, Haijun Zhang, Meng Wang, Shuicheng Yan","doi":"10.1145/3503161.3547962","DOIUrl":null,"url":null,"abstract":"Blind image deblurring is still a challenging problem due to the inherent ill-posed properties. To improve the deblurring performance, many supervised methods have been proposed. However, obtaining labeled samples from a specific distribution (or a domain) is usually expensive, and the data-driven training-based model also cannot be generalized to the blurry images in all domains. These challenges have given birth to certain unsupervised deblurring methods. However, there is a great chromatic aberration between the latent and original images, directly degrading the performance. In this paper, we therefore propose a novel unsupervised color retention network termed CRNet to perform blind motion deblurring. In addition, new concepts of blur offset estimation and adaptive blur correction are proposed to retain the color information when deblurring. As a result, unlike the previous studies, CRNet does not learn a mapping directly from the blurry image to the restored latent image, but from the blurry image to a motion offset. An adaptive blur correction operation is then performed on the blurry image to restore the latent image, thereby retaining the color information of the original image to the greatest extent. To further effectively retain the color information and extract the blur information, we also propose a new module called pyramid global blur feature perception (PGBFP). To quantitatively prove the effectiveness of our network in color retention, we propose a novel chromatic aberration quantization metrics in line with the human perception. Extensive quantitative and visualization experiments show that CRNet can obtain the state-of-the-art performance in unsupervised deblurring tasks.","PeriodicalId":412792,"journal":{"name":"Proceedings of the 30th ACM International Conference on Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM International Conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3503161.3547962","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Blind image deblurring remains a challenging problem due to its inherently ill-posed nature. To improve deblurring performance, many supervised methods have been proposed. However, obtaining labeled samples from a specific distribution (or domain) is usually expensive, and data-driven, training-based models also cannot generalize to blurry images from all domains. These challenges have given rise to a number of unsupervised deblurring methods. However, these methods suffer from a large chromatic aberration between the restored latent image and the original image, which directly degrades performance. In this paper, we therefore propose a novel unsupervised color retention network, termed CRNet, for blind motion deblurring. In addition, the new concepts of blur offset estimation and adaptive blur correction are proposed to retain color information during deblurring. As a result, unlike previous studies, CRNet does not learn a mapping directly from the blurry image to the restored latent image, but from the blurry image to a motion offset. An adaptive blur correction operation is then performed on the blurry image to restore the latent image, thereby retaining the color information of the original image to the greatest extent. To further retain color information and extract blur information effectively, we also propose a new module called pyramid global blur feature perception (PGBFP). To quantitatively demonstrate the effectiveness of our network in color retention, we propose a novel chromatic aberration quantization metric in line with human perception. Extensive quantitative and visualization experiments show that CRNet achieves state-of-the-art performance on unsupervised deblurring tasks.
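The abstract describes the core idea only at a high level: instead of mapping a blurry image directly to a latent image, the network predicts a motion offset and then applies an adaptive correction to the blurry input itself, so the output pixels inherit the input's colors. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the OffsetEstimator network, the per-pixel offset parameterization, and the grid_sample-based correction are assumptions made for exposition and are not the paper's actual architecture or operators.

```python
# Illustrative sketch of "blur offset estimation + adaptive blur correction".
# NOTE: this is NOT CRNet itself; the network and the correction operator
# below are simplified assumptions used only to make the abstract concrete.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OffsetEstimator(nn.Module):
    """Tiny CNN that predicts a per-pixel 2-channel motion offset (dx, dy)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, 3, padding=1),  # (dx, dy) per pixel
        )

    def forward(self, blurry: torch.Tensor) -> torch.Tensor:
        return self.net(blurry)


def adaptive_correction(blurry: torch.Tensor, offset: torch.Tensor) -> torch.Tensor:
    """Resample the blurry input along the predicted offsets.

    Because output pixels are drawn from the input image rather than
    synthesized from scratch, the original color statistics are largely
    preserved -- the intuition behind "color retention" in the abstract.
    """
    b, _, h, w = blurry.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=blurry.device),
        torch.linspace(-1, 1, w, device=blurry.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Offsets are predicted in pixels; convert them to normalized coordinates.
    dx = offset[:, 0] * 2.0 / max(w - 1, 1)
    dy = offset[:, 1] * 2.0 / max(h - 1, 1)
    grid = base + torch.stack((dx, dy), dim=-1)
    return F.grid_sample(blurry, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


if __name__ == "__main__":
    blurry = torch.rand(1, 3, 64, 64)             # stand-in for a blurry frame
    offset = OffsetEstimator()(blurry)            # blurry image -> motion offset
    latent = adaptive_correction(blurry, offset)  # offset -> restored estimate
    print(latent.shape)                           # torch.Size([1, 3, 64, 64])
```

In a complete system the offset estimator would be trained with unsupervised objectives as described in the paper; that training loop and the PGBFP module are omitted from this sketch.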