Image Inpainting with Context Flow Network

Jian Liu, Jiarui Xue, Juan Zhang, Ying Yang
{"title":"Image Inpainting with Context Flow Network","authors":"Jian Liu, Jiarui Xue, Juan Zhang, Ying Yang","doi":"10.1109/ICTAI56018.2022.00141","DOIUrl":null,"url":null,"abstract":"Image inpainting using deep and complicated convolutional neural networks(CNN) has recently produced outstanding results. Several researchers have considered employing large receptive fields and deep networks for long-distance information transfer to obtain semantically coherent inpainting results. As a side effect, these strategies would lead to the loss of detail and other artifacts. Motivated by the attention mechanism and sequence-to-sequence model, a novel convolution structure called context flow module is introduced into a coarse-to-fine two stages network, extracting information from distant regions without extra network layers or details loss. The context flow module in the refinement network can effectively gather both spatial and contextual data in the distance, and flow information to the next layer patch by patch. The coarse and refinement networks' backbones are encoder-decoder architecture stacked with gated and dilated convolutions. The refinement network encloses two extra elements: the context flow module and a feature-sharing space. The coarse network generates semantically consistent images with no gaps. The refinement network enhances the sharpness and enriches the details of the initial results. Moreover, a patch-based GAN is applied to stabilize training and generate semantically reasonable results. Experimental results show that our method excels at the performance of the state-of-the-art methods on faces(CelebA), buildings(Paris Street View), and natural images(Places2) datasets. 
The proposed context flow module can be easily integrated with any existing networks to improve their inpainting performance.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00141","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Image inpainting using deep, complex convolutional neural networks (CNNs) has recently produced outstanding results. Several researchers have employed large receptive fields and deep networks for long-distance information transfer to obtain semantically coherent inpainting results; as a side effect, these strategies can cause loss of detail and other artifacts. Motivated by the attention mechanism and the sequence-to-sequence model, a novel convolutional structure called the context flow module is introduced into a coarse-to-fine two-stage network, extracting information from distant regions without extra network layers or loss of detail. The context flow module in the refinement network effectively gathers both spatial and contextual information from distant regions and flows information to the next layer patch by patch. The backbones of the coarse and refinement networks are encoder-decoder architectures stacked with gated and dilated convolutions. The refinement network adds two extra elements: the context flow module and a feature-sharing space. The coarse network generates semantically consistent images with no gaps; the refinement network then sharpens these initial results and enriches their details. Moreover, a patch-based GAN is applied to stabilize training and generate semantically reasonable results. Experimental results show that our method outperforms state-of-the-art methods on the faces (CelebA), buildings (Paris Street View), and natural images (Places2) datasets. The proposed context flow module can be easily integrated into any existing network to improve its inpainting performance.
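The abstract names gated and dilated convolutions as the backbone's building blocks but does not give their exact form. In the standard formulation from free-form inpainting work, a feature branch is modulated elementwise by a learned soft gate, and dilation enlarges the receptive field without adding layers, which matches the motivation above. A minimal single-channel NumPy sketch (the kernels and function names here are illustrative, not taken from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d(x, kernel, dilation=1):
    """Valid 2D cross-correlation (single channel) with optional dilation."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective kernel extent after dilation
    eff_w = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def gated_conv(x, w_feat, w_gate, dilation=1):
    """Gated convolution: a feature branch scaled by a learned soft mask."""
    feat = np.maximum(conv2d(x, w_feat, dilation), 0.0)  # ReLU feature branch
    gate = sigmoid(conv2d(x, w_gate, dilation))          # soft gate in (0, 1)
    return feat * gate
```

With `dilation=2`, a 3x3 kernel covers a 5x5 neighborhood at the same parameter count, which is how stacked dilated layers reach distant context cheaply.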
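The abstract says the context flow module gathers distant spatial and contextual information "patch by patch" but does not specify the mechanism. One plausible reading, borrowed from contextual-attention-style inpainting, matches a hole patch against background patches by cosine similarity and blends the best matches. A hedged single-channel sketch, assuming flattened patches (all function names are hypothetical):

```python
import numpy as np

def extract_patches(x, size):
    """Split a 2D array into non-overlapping size x size patches, flattened."""
    H, W = x.shape
    patches = []
    for i in range(0, H - size + 1, size):
        for j in range(0, W - size + 1, size):
            patches.append(x[i:i + size, j:j + size].ravel())
    return np.array(patches)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(query_patch, key_patches, temperature=10.0):
    """Blend background (key) patches by cosine similarity to one hole patch."""
    q = query_patch / (np.linalg.norm(query_patch) + 1e-8)
    K = key_patches / (np.linalg.norm(key_patches, axis=1, keepdims=True) + 1e-8)
    weights = softmax((K @ q) * temperature)  # temperature sharpens the match
    return weights @ key_patches              # weighted blend of key patches
```

Applied per patch and per layer, this kind of matching would let information flow from distant valid regions into the hole without adding depth, consistent with the module's stated goal; the paper's actual operator may differ.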