Reminiscent Net: Conditional GAN-based Old Image De-Creasing

O. Ramwala, Smeet A. Dhakecha, C. Paunwala, M. Paunwala
{"title":"怀旧网络:基于条件gan的旧图像降噪","authors":"O. Ramwala, Smeet A. Dhakecha, C. Paunwala, M. Paunwala","doi":"10.1142/S0219467821500509","DOIUrl":null,"url":null,"abstract":"Documents are an essential source of valuable information and knowledge, and photographs are a great way of reminiscing old memories and past events. However, it becomes difficult to preserve the quality of such ancient documents and old photographs for an extremely long time, as these images usually get damaged or creased due to various extrinsic effects. Utilizing image editing software like Photoshop to manually reconstruct such old photographs and documents is a strenuous and an enduring process. This paper attempts to leverage the generative modeling capabilities of Conditional Generative Adversarial Networks by utilizing specialized architectures for the Generator and the Discriminator. The proposed Reminiscent Net has a U-Net-based Generator with numerous feature maps for complete information transfer with the incorporation of location and contextual details, and the absence of dense layers allows utilization of diverse sized images. Implementation of the PatchGAN-based Discriminator that penalizes the image at the scale of patches has been proposed. NADAM optimizer has been implemented to enable faster and better convergence of the loss function. The proposed method produces visually appealing de-creased images, and experiments indicate that the architecture performs better than various novel approaches, both qualitatively and quantitatively.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Reminiscent Net: Conditional GAN-based Old Image De-Creasing\",\"authors\":\"O. Ramwala, Smeet A. Dhakecha, C. Paunwala, M. Paunwala\",\"doi\":\"10.1142/S0219467821500509\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Documents are an essential source of valuable information and knowledge, and photographs are a great way of reminiscing old memories and past events. However, it becomes difficult to preserve the quality of such ancient documents and old photographs for an extremely long time, as these images usually get damaged or creased due to various extrinsic effects. Utilizing image editing software like Photoshop to manually reconstruct such old photographs and documents is a strenuous and an enduring process. This paper attempts to leverage the generative modeling capabilities of Conditional Generative Adversarial Networks by utilizing specialized architectures for the Generator and the Discriminator. The proposed Reminiscent Net has a U-Net-based Generator with numerous feature maps for complete information transfer with the incorporation of location and contextual details, and the absence of dense layers allows utilization of diverse sized images. Implementation of the PatchGAN-based Discriminator that penalizes the image at the scale of patches has been proposed. NADAM optimizer has been implemented to enable faster and better convergence of the loss function. The proposed method produces visually appealing de-creased images, and experiments indicate that the architecture performs better than various novel approaches, both qualitatively and quantitatively.\",\"PeriodicalId\":177479,\"journal\":{\"name\":\"Int. J. 
Image Graph.\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-03-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. J. Image Graph.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1142/S0219467821500509\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Image Graph.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/S0219467821500509","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Documents are an essential source of valuable information and knowledge, and photographs are a great way of reminiscing about old memories and past events. However, it is difficult to preserve the quality of such ancient documents and old photographs over very long periods, as these images usually become damaged or creased due to various extrinsic effects. Using image editing software like Photoshop to manually reconstruct such old photographs and documents is a strenuous and enduring process. This paper attempts to leverage the generative modeling capabilities of Conditional Generative Adversarial Networks by utilizing specialized architectures for the Generator and the Discriminator. The proposed Reminiscent Net has a U-Net-based Generator with numerous feature maps for complete information transfer, incorporating location and contextual details, and the absence of dense layers allows images of diverse sizes to be used. A PatchGAN-based Discriminator that penalizes the image at the scale of patches is proposed. The NADAM optimizer is used to enable faster and better convergence of the loss function. The proposed method produces visually appealing de-creased images, and experiments indicate that the architecture performs better than various novel approaches, both qualitatively and quantitatively.
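
The abstract describes a pix2pix-style conditional GAN: a fully convolutional U-Net generator with skip connections (no dense layers), a PatchGAN discriminator that scores local patches rather than the whole image, and the Nadam optimizer. The following is a minimal sketch of such a setup in TensorFlow/Keras; the layer counts, filter widths, input resolution, and learning rate are illustrative assumptions and are not taken from the paper.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_generator(shape=(256, 256, 1)):
    # Illustrative U-Net-style encoder-decoder: fully convolutional (no dense
    # layers), with skip connections carrying location detail to the decoder.
    inp = layers.Input(shape)
    d1 = layers.Conv2D(64, 4, strides=2, padding="same", activation=tf.nn.leaky_relu)(inp)
    d2 = layers.Conv2D(128, 4, strides=2, padding="same", activation=tf.nn.leaky_relu)(d1)
    d3 = layers.Conv2D(256, 4, strides=2, padding="same", activation=tf.nn.leaky_relu)(d2)
    u1 = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(d3)
    u1 = layers.Concatenate()([u1, d2])          # skip connection
    u2 = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(u1)
    u2 = layers.Concatenate()([u2, d1])          # skip connection
    out = layers.Conv2DTranspose(shape[-1], 4, strides=2, padding="same", activation="tanh")(u2)
    return Model(inp, out, name="unet_generator")

def build_discriminator(shape=(256, 256, 1)):
    # PatchGAN-style critic: conditioned on the creased input, it outputs a
    # grid of logits, so each logit judges the realism of one local patch.
    creased = layers.Input(shape)
    restored = layers.Input(shape)
    x = layers.Concatenate()([creased, restored])
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 4, strides=2, padding="same", activation=tf.nn.leaky_relu)(x)
    patch_scores = layers.Conv2D(1, 4, padding="same")(x)   # one logit per patch
    return Model([creased, restored], patch_scores, name="patchgan_discriminator")

generator = build_generator()
discriminator = build_discriminator()

# Nadam, as mentioned in the abstract; the learning rate here is an assumed value.
g_opt = tf.keras.optimizers.Nadam(learning_rate=2e-4)
d_opt = tf.keras.optimizers.Nadam(learning_rate=2e-4)

# Example forward pass on a dummy batch of creased images (assumed 256x256 grayscale).
dummy = tf.zeros([1, 256, 256, 1])
restored = generator(dummy)
scores = discriminator([dummy, restored])    # grid of per-patch real/fake logits

In pix2pix-style training, the per-patch logits would be trained against real/fake targets with a binary cross-entropy loss, typically combined with an L1 reconstruction term on the generator output; those training details are not specified in the abstract and are omitted here.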