Arbitrary Style Transfer with Multiple Self-Attention

Yuzhu Song, Li Liu, Huaxiang Zhang, Dongmei Liu, Hongzhen Li
{"title":"Arbitrary Style Transfer with Multiple Self-Attention","authors":"Yuzhu Song, Li Liu, Huaxiang Zhang, Dongmei Liu, Hongzhen Li","doi":"10.1145/3599589.3599605","DOIUrl":null,"url":null,"abstract":"Style transfer aims to transfer the style information of a given style image to the other images, but most existing methods cannot transfer the texture details in style images well while maintaining the content structure. This paper proposes a novel arbitrary style transfer network that achieves arbitrary style transfer with more local style details through the cross-attention mechanism in visual transforms. The network uses a pre-trained VGG network to extract content and style features. The self-attention-based content and style enhancement module is utilized to enhance content and style feature representation. The transformer-based style cross-attention module is utilized to learn the relationship between content features and style features to transfer appropriate styles at each position of the content feature map and achieve style transfer with local details. Extensive experiments show that the proposed arbitrary style transfer network can generate high-quality stylized images with better visual quality.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3599589.3599605","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Style transfer aims to transfer the style of a given style image onto other images, but most existing methods cannot faithfully transfer the texture details of the style image while preserving the content structure. This paper proposes a novel arbitrary style transfer network that renders more local style detail through the cross-attention mechanism of vision transformers. The network uses a pre-trained VGG network to extract content and style features. A self-attention-based content and style enhancement module strengthens the content and style feature representations, and a transformer-based style cross-attention module learns the relationship between content and style features so that an appropriate style is transferred at each position of the content feature map, yielding style transfer with local details. Extensive experiments show that the proposed network generates stylized images with better visual quality than existing methods.
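The abstract describes two attention components: a self-attention module that enhances the VGG content and style features, and a transformer-based cross-attention module in which content features query style features. The sketch below is one plausible PyTorch realization of these two ideas, not the authors' implementation; all module names, shapes, and hyperparameters (e.g. `SelfAttentionEnhance`, `StyleCrossAttention`, 512-channel VGG relu4_1 features) are illustrative assumptions.

```python
# Hedged sketch of the two attention modules described in the abstract.
# Assumes PyTorch; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionEnhance(nn.Module):
    """Self-attention block to enhance a content or style feature map (B, C, H, W)."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)               # (B, HW, HW) spatial attention
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual enhancement

class StyleCrossAttention(nn.Module):
    """Cross-attention: each content position queries the style features,
    so an appropriate style is transferred per position of the content map."""
    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        b, c, h, w = content.shape
        q = content.flatten(2).transpose(1, 2)  # (B, HW_content, C) as queries
        kv = style.flatten(2).transpose(1, 2)   # (B, HW_style, C) as keys/values
        out, _ = self.attn(self.norm(q), kv, kv)
        return (out + q).transpose(1, 2).view(b, c, h, w)  # residual connection

# Usage with assumed VGG relu4_1-like features (512 channels, 1/8 resolution):
content_feat = torch.randn(1, 512, 32, 32)
style_feat = torch.randn(1, 512, 32, 32)
enhance = SelfAttentionEnhance(512)
cross = StyleCrossAttention(512)
stylized = cross(enhance(content_feat), enhance(style_feat))
print(stylized.shape)  # torch.Size([1, 512, 32, 32])
```

The residual connections in both modules reflect a common design choice for attention-based style transfer: the enhanced or stylized features stay anchored to the original content layout while attention injects style detail. A decoder (not sketched here) would then reconstruct the stylized image from these features.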