{"title":"结合自注意机制的小波网络一般风格迁移模型","authors":"Mingqiang Yan, Pengfei Yu, Haiyan Li, Hongsong Li","doi":"10.1109/ITOEC53115.2022.9734489","DOIUrl":null,"url":null,"abstract":"Artistic style transfer refers to using two images (content image and style image) as a reference, preserving the content of the content image as much as possible and transferring the style characteristics of the style image to content image. Existing methods usually use various normalization techniques, but these techniques have limitations in completely transferring different textures to different spatial locations. The method based on self-attention has solved this problem and made some progress, but there are also unobvious image textures, resulting in unnecessary artifacts. There are also attempts to add a mask to fix the image layout for style transfer, but it is difficult to produce a coordinated result at the mask boundary. Someone tried to embed the wavelet network in the VGG network to obtain more detailed stylized images, and indeed achieved more visually pleasing results. To solve these problems, this paper attempts to combine the advantages of wavelet transform, self-attention mechanism, whitening and coloring transform WCT (Whiten-Color Transform) in image feature extraction, and propose a new general style transfer method to better weigh the semantic information and style characteristics of content images and style images. Moreover, this paper use the self-attention mechanism to obtain the high-level semantic information of the image to make up for the missing details of the reconstructed image. Compared with the previous methods, the proposed method does not need to train for a certain feature map, and any style map can be used to transfer the style of the content map. 
Experimental results show that the model has good effects in terms of style transfer and artifact removal, and also prove that the method has good versatility.","PeriodicalId":127300,"journal":{"name":"2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC)","volume":"203 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"General style transfer model of wavelet network combined with self-attention mechanism\",\"authors\":\"Mingqiang Yan, Pengfei Yu, Haiyan Li, Hongsong Li\",\"doi\":\"10.1109/ITOEC53115.2022.9734489\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artistic style transfer refers to using two images (content image and style image) as a reference, preserving the content of the content image as much as possible and transferring the style characteristics of the style image to content image. Existing methods usually use various normalization techniques, but these techniques have limitations in completely transferring different textures to different spatial locations. The method based on self-attention has solved this problem and made some progress, but there are also unobvious image textures, resulting in unnecessary artifacts. There are also attempts to add a mask to fix the image layout for style transfer, but it is difficult to produce a coordinated result at the mask boundary. Someone tried to embed the wavelet network in the VGG network to obtain more detailed stylized images, and indeed achieved more visually pleasing results. 
To solve these problems, this paper attempts to combine the advantages of wavelet transform, self-attention mechanism, whitening and coloring transform WCT (Whiten-Color Transform) in image feature extraction, and propose a new general style transfer method to better weigh the semantic information and style characteristics of content images and style images. Moreover, this paper use the self-attention mechanism to obtain the high-level semantic information of the image to make up for the missing details of the reconstructed image. Compared with the previous methods, the proposed method does not need to train for a certain feature map, and any style map can be used to transfer the style of the content map. Experimental results show that the model has good effects in terms of style transfer and artifact removal, and also prove that the method has good versatility.\",\"PeriodicalId\":127300,\"journal\":{\"name\":\"2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC)\",\"volume\":\"203 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITOEC53115.2022.9734489\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 6th Information Technology and Mechatronics Engineering Conference 
(ITOEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITOEC53115.2022.9734489","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
General style transfer model of wavelet network combined with self-attention mechanism
Artistic style transfer uses two images, a content image and a style image, as references: it preserves the content of the content image as much as possible while transferring the style characteristics of the style image onto it. Existing methods usually rely on various normalization techniques, but these have limited ability to transfer different textures to their appropriate spatial locations. Self-attention-based methods have addressed this problem and made some progress, but image textures can remain indistinct, producing unwanted artifacts. Other approaches add a mask to fix the image layout during style transfer, but they struggle to produce coordinated results at mask boundaries. Embedding a wavelet network into the VGG network has also been tried to obtain more detailed stylized images, and it indeed yields more visually pleasing results. To address these problems, this paper combines the advantages of the wavelet transform, the self-attention mechanism, and the whitening and coloring transform (WCT) for image feature extraction, and proposes a new general style transfer method that better balances the semantic information and style characteristics of content and style images. In addition, the self-attention mechanism is used to capture high-level semantic information of the image and compensate for details missing from the reconstructed image. Unlike previous methods, the proposed method does not need to be trained for a particular style: any style image can be used to stylize the content image. Experimental results show that the model performs well at style transfer and artifact removal, and demonstrate that the method is highly versatile.
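The whitening and coloring transform (WCT) named in the abstract is the standard feature-statistics matching step used in universal style transfer: content features are whitened to remove their channel correlations, then "colored" so that they take on the style features' covariance. The sketch below is a minimal NumPy illustration of that classic step only; the paper's exact formulation, and how it is combined with the wavelet network and self-attention, are not specified here, so the function name, `eps` regularizer, and array shapes are illustrative assumptions.

```python
import numpy as np

def wct(content_feat, style_feat, eps=1e-5):
    """Whitening and coloring transform on flattened feature maps.

    content_feat, style_feat: arrays of shape (C, H*W), i.e. one row of
    spatial activations per channel. Returns content features whose
    channel mean and covariance match those of the style features.
    This is a generic WCT sketch, not the paper's specific pipeline.
    """
    # Center the content features per channel.
    c_mean = content_feat.mean(axis=1, keepdims=True)
    fc = content_feat - c_mean

    # Whitening: eigendecompose the (regularized) content covariance and
    # rescale so the whitened features have ~identity covariance.
    cov_c = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    wc, vc = np.linalg.eigh(cov_c)
    whitened = vc @ np.diag(wc ** -0.5) @ vc.T @ fc

    # Coloring: impose the style covariance on the whitened features.
    s_mean = style_feat.mean(axis=1, keepdims=True)
    fs = style_feat - s_mean
    cov_s = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])
    ws, vs = np.linalg.eigh(cov_s)
    colored = vs @ np.diag(ws ** 0.5) @ vs.T @ whitened

    # Shift to the style mean so first- and second-order stats both match.
    return colored + s_mean
```

In a full pipeline this transform would be applied to encoder features (here, presumably features from the wavelet-augmented VGG encoder) before decoding the stylized image; the `eps` term keeps the eigendecomposition stable when the covariance is near-singular.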