Title: Multi-head Mutual Self-attention Generative Adversarial Network for Texture Synthesis
Authors: Shasha Xie, Wenhua Qian
Venue: 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP)
Published: 2022-04-15
DOI: https://doi.org/10.1109/ICSP54964.2022.9778480
Citations: 1
Abstract
Example-based texture synthesis requires synthesizing textures that are as similar as possible to the exemplar. However, for complex texture patterns, existing methods produce incorrect synthesis results due to insufficient feature extraction capability. To address this problem, this paper proposes an optimized generative adversarial network model that targets quality issues such as low resolution and insufficient detail in texture synthesis. To this end, we propose a new multi-head mutual self-attention (MHMSA) mechanism. Unlike self-attention, MHMSA models the mutual relationships among all positions in the feature space, so that cues from every feature position can be used to generate details. Embedding MHMSA into the generator therefore improves its ability to extract both detailed and global features. Experimental results show that the proposed model significantly improves the visual quality of synthesized texture images and demonstrate that MHMSA outperforms self-attention in the image generation task.
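The abstract does not give the MHMSA formulation, so the exact mechanism cannot be reproduced here. As a point of reference, the multi-head self-attention baseline it extends — every spatial position of a feature map attending to every other position — can be sketched in NumPy. All names, shapes, and the use of random weights in place of learned parameters are assumptions for illustration only.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """Baseline multi-head self-attention over flattened feature positions.

    x: (N, C) array of N spatial positions with C channels.
    MHMSA, per the paper, additionally models mutual relationships
    between positions; this sketch shows only the standard mechanism.
    """
    n, c = x.shape
    assert c % num_heads == 0
    d = c // num_heads  # per-head channel dimension

    # Random projections stand in for learned query/key/value weights.
    wq = rng.standard_normal((c, c)) / np.sqrt(c)
    wk = rng.standard_normal((c, c)) / np.sqrt(c)
    wv = rng.standard_normal((c, c)) / np.sqrt(c)

    # Split channels into heads: (N, C) -> (H, N, d).
    q = (x @ wq).reshape(n, num_heads, d).transpose(1, 0, 2)
    k = (x @ wk).reshape(n, num_heads, d).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, num_heads, d).transpose(1, 0, 2)

    # Scaled dot-product attention: each position attends to all others,
    # so information from the whole feature map reaches each output.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)   # (H, N, N)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)         # softmax over positions

    out = attn @ v                                   # (H, N, d)
    return out.transpose(1, 0, 2).reshape(n, c)      # merge heads -> (N, C)

rng = np.random.default_rng(0)
features = rng.standard_normal((16, 8))  # e.g. a 4x4 feature map, 8 channels
y = multi_head_self_attention(features, num_heads=2, rng=rng)
print(y.shape)  # (16, 8)
```

Because the attention weights form a full N-by-N matrix per head, each output position mixes information from every input position — the "clues from all feature positions" property the abstract attributes to attention-based generators.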