Structure Guided Photorealistic Style Transfer

Yuheng Zhi, Huawei Wei, Bingbing Ni
Proceedings of the 26th ACM International Conference on Multimedia (MM '18), published October 15, 2018. DOI: 10.1145/3240508.3240637. Cited by: 6.

Abstract

Recent style transfer methods based on deep networks strive to generate stylized images that better match the content by adding semantic guidance to the iterative process. However, these approaches can only guarantee the transfer of overall color and texture distributions between semantically equivalent regions; local variation within these regions is not accurately captured. As a result, the output image lacks local plausibility. To this end, we develop a non-parametric, patch-based style transfer framework that synthesizes more content-coherent images. By designing a novel patch matching algorithm that simultaneously takes high-level category information and geometric structure information (e.g., human pose and building structure) into account, our proposed method is capable of transferring more detailed distributions and producing more photorealistic stylized images. We show that our approach achieves remarkable style transfer results on content with geometric structure, including the human body, vehicles, and buildings.
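To make the matching idea concrete, the following is a minimal, hypothetical sketch of patch matching that combines appearance, semantic-category, and geometric-structure distances, as the abstract describes. It is not the authors' actual algorithm; the feature shapes, the penalty weights `lam_sem` and `lam_geo`, and the brute-force nearest-neighbor search are all illustrative assumptions.

```python
import numpy as np

def match_patches(content_feats, style_feats, content_labels, style_labels,
                  content_geo, style_geo, lam_sem=1e3, lam_geo=1.0):
    """For each content patch, find the style patch minimizing a combined
    distance over appearance, semantic category, and geometric descriptors.

    content_feats: (Nc, D) appearance features (e.g. CNN activations)
    style_feats:   (Ns, D) appearance features of the style patches
    content_labels, style_labels: (Nc,), (Ns,) integer category labels
    content_geo, style_geo: (Nc, G), (Ns, G) geometric descriptors
        (e.g. offsets to pose keypoints or building edges -- assumed here)
    Returns an index array (Nc,) mapping each content patch to a style patch.
    """
    # Pairwise squared appearance distances, shape (Nc, Ns)
    app = ((content_feats[:, None, :] - style_feats[None, :, :]) ** 2).sum(-1)
    # Large penalty whenever the semantic categories disagree
    sem = lam_sem * (content_labels[:, None] != style_labels[None, :])
    # Squared distance between geometric-structure descriptors
    geo = lam_geo * ((content_geo[:, None, :] - style_geo[None, :, :]) ** 2).sum(-1)
    return np.argmin(app + sem + geo, axis=1)
```

With a large `lam_sem`, a content patch prefers a style patch of the same category even when another patch is closer in appearance, which is the behavior the semantic guidance is meant to enforce.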