Visual Perception Preserved Denoising Network in Image Translation

N. Xu, Kangkang Song, Jiangjian Xiao, Chengbin Peng
DOI: 10.1109/ISPDS56360.2022.9874112
Published in: 2022 3rd International Conference on Information Science, Parallel and Distributed Systems (ISPDS)
Publication date: 2022-07-22
Citations: 0

Abstract

Image denoising is a fundamental problem in computer vision and has received considerable attention. With the rapid development of convolutional neural networks, more and more deep learning-based noise reduction algorithms have emerged. However, current image denoising networks tend to apply noise reduction only in the RGB color space, ignoring information at the visual perception level, so the images these algorithms generate are overly smooth and lack texture and detail. Therefore, this paper proposes a novel noise reduction network for image translation that uses a deep learning feature space instead of the traditional RGB color space to restore more realistic and more detailed texture information in generated images. The network contains a visual perception generator and a multi-objective optimization network. The generator includes a multiscale encoder-decoder sub-network, which extracts high-level perceptual features from input images. The optimization network combines a content consistency loss, a multiscale adversarial generation loss, and a discriminator feature alignment loss, which together effectively retain detailed texture information in the images. We synthesized noise of suitable intensity on publicly available datasets and conducted multiple experiments to verify the effectiveness of the algorithm. The experimental results show that the proposed algorithm significantly improves textures and details in denoised images. The algorithm removes a large amount of noise while preserving much of the perceptual information at the visual level, generating more realistic images with detailed texture features.
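The abstract names three training objectives but gives no formulas. The sketch below is a minimal, illustrative combination of the three losses, not the authors' implementation: the distance metrics (L1 for content consistency, non-saturating log loss for the adversarial term, L2 for discriminator feature alignment) and the weights are assumptions, and the feature vectors stand in for the perceptual and discriminator features the paper's networks would produce.

```python
import math

def content_consistency_loss(feat_denoised, feat_clean):
    # Mean absolute (L1) distance between perceptual feature vectors
    # of the denoised output and the clean target (an assumed metric).
    return sum(abs(a - b) for a, b in zip(feat_denoised, feat_clean)) / len(feat_denoised)

def adversarial_loss(disc_scores):
    # Non-saturating generator loss: push discriminator outputs toward 1 ("real").
    eps = 1e-8  # numerical guard for log(0)
    return -sum(math.log(s + eps) for s in disc_scores) / len(disc_scores)

def feature_alignment_loss(dfeat_denoised, dfeat_clean):
    # Mean squared (L2) distance between intermediate discriminator
    # features of the denoised and clean images.
    return sum((a - b) ** 2 for a, b in zip(dfeat_denoised, dfeat_clean)) / len(dfeat_denoised)

def total_loss(feat_d, feat_c, scores, dfeat_d, dfeat_c,
               w_content=1.0, w_adv=0.1, w_align=0.5):
    # Weighted sum over the three objectives; the weights are hypothetical.
    return (w_content * content_consistency_loss(feat_d, feat_c)
            + w_adv * adversarial_loss(scores)
            + w_align * feature_alignment_loss(dfeat_d, dfeat_c))
```

In practice each term would operate on tensors from the multiscale encoder-decoder and discriminator rather than flat lists, but the weighted-sum structure is what "multi-objective optimization network" implies.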