{"title":"基于局部特征变换模块的无监督双向风格转移网络","authors":"K. Bae, Hyungil Kim, Y. Kwon, Jinyoung Moon","doi":"10.1109/CVPRW59228.2023.00081","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a bidirectional style transfer method by exchanging the style of inputs while preserving the structural information. The proposed bidirectional style transfer network consists of three modules: 1) content and style extraction module that extracts the structure and style-related features, 2) local feature transform module that aligns locally extracted feature to its original coordinate, and 3) reconstruction module that generates a newly stylized image. Given two input images, we extract content and style information from both images in a global and local manner, respectively. Note that the content extraction module removes style-related information by compressing the dimension of the feature tensor to a single channel. The style extraction module removes content information by gradually reducing the spatial size of a feature tensor. The local feature transform module exchanges the style information and spatially transforms the local features to its original location. By substituting the style information with one another in both ways (i.e., global and local) bidirectionally, the reconstruction module generates a newly stylized image without diminishing the core structure. Furthermore, we enable the proposed network to control the degree of style to be applied when exchanging the style of inputs bidirectionally. Through the experiments, we compare the bidirectionally style transferred results with existing methods quantitatively and qualitatively. We show generation results by controlling the degree of applied style and adopting various textures to an identical structure.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unsupervised Bidirectional Style Transfer Network using Local Feature Transform Module\",\"authors\":\"K. Bae, Hyungil Kim, Y. Kwon, Jinyoung Moon\",\"doi\":\"10.1109/CVPRW59228.2023.00081\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose a bidirectional style transfer method by exchanging the style of inputs while preserving the structural information. The proposed bidirectional style transfer network consists of three modules: 1) content and style extraction module that extracts the structure and style-related features, 2) local feature transform module that aligns locally extracted feature to its original coordinate, and 3) reconstruction module that generates a newly stylized image. Given two input images, we extract content and style information from both images in a global and local manner, respectively. Note that the content extraction module removes style-related information by compressing the dimension of the feature tensor to a single channel. The style extraction module removes content information by gradually reducing the spatial size of a feature tensor. The local feature transform module exchanges the style information and spatially transforms the local features to its original location. 
By substituting the style information with one another in both ways (i.e., global and local) bidirectionally, the reconstruction module generates a newly stylized image without diminishing the core structure. Furthermore, we enable the proposed network to control the degree of style to be applied when exchanging the style of inputs bidirectionally. Through the experiments, we compare the bidirectionally style transferred results with existing methods quantitatively and qualitatively. We show generation results by controlling the degree of applied style and adopting various textures to an identical structure.\",\"PeriodicalId\":355438,\"journal\":{\"name\":\"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CVPRW59228.2023.00081\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW59228.2023.00081","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unsupervised Bidirectional Style Transfer Network using Local Feature Transform Module
In this paper, we propose a bidirectional style transfer method that exchanges the styles of two input images while preserving their structural information. The proposed bidirectional style transfer network consists of three modules: 1) a content and style extraction module that extracts structure- and style-related features, 2) a local feature transform module that aligns locally extracted features to their original coordinates, and 3) a reconstruction module that generates a newly stylized image. Given two input images, we extract content and style information from both images in both a global and a local manner. Note that the content extraction module removes style-related information by compressing the channel dimension of the feature tensor to a single channel, while the style extraction module removes content information by gradually reducing the spatial size of the feature tensor. The local feature transform module exchanges the style information and spatially transforms the local features back to their original locations. By substituting the style information bidirectionally at both scales (i.e., global and local), the reconstruction module generates a newly stylized image without diminishing the core structure. Furthermore, the proposed network can control the degree of style applied when exchanging the styles of the inputs bidirectionally. Through experiments, we compare our bidirectional style transfer results with those of existing methods quantitatively and qualitatively, and we show generation results obtained by controlling the degree of applied style and by applying various textures to an identical structure.
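The abstract outlines a concrete pipeline: single-channel content maps, spatially pooled style codes, a bidirectional style exchange, and a controllable degree of style. The PyTorch sketch below is intended only to fix these ideas; it is not the authors' implementation. All layer choices, the `alpha` blending used for style-degree control, and the omission of the local feature transform branch (only the global style path is shown) are assumptions made for illustration.

```python
# A minimal, untrained sketch of the three-module design described in the
# abstract. Layer sizes, the alpha blending, and the absence of the local
# feature transform branch are illustrative assumptions, not the paper's
# actual architecture.
import torch
import torch.nn as nn

class ContentExtractor(nn.Module):
    """Removes style cues by compressing the feature tensor to one channel."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1),  # channel dimension -> 1: structure only
        )
    def forward(self, x):
        return self.net(x)  # (B, 1, H, W) structure map

class StyleExtractor(nn.Module):
    """Removes content by gradually shrinking the spatial size."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # spatial size -> 1x1: style only
        )
    def forward(self, x):
        return self.net(x)  # (B, ch, 1, 1) global style code

class Reconstructor(nn.Module):
    """Generates a stylized image from a structure map and a style code."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, content, style):
        # Broadcast the 1x1 style code over the structure map's spatial grid.
        style_map = style.expand(-1, -1, *content.shape[-2:])
        return self.net(torch.cat([content, style_map], dim=1))

def bidirectional_transfer(x_a, x_b, alpha=1.0):
    """Exchange styles in both directions; alpha controls the style degree.

    In practice the three modules would be trained jointly; fresh random
    weights are used here purely to keep the sketch self-contained.
    """
    content_net, style_net, recon = ContentExtractor(), StyleExtractor(), Reconstructor()
    c_a, c_b = content_net(x_a), content_net(x_b)
    s_a, s_b = style_net(x_a), style_net(x_b)
    # Interpolate between an image's own style and the exchanged style.
    s_ab = alpha * s_b + (1 - alpha) * s_a
    s_ba = alpha * s_a + (1 - alpha) * s_b
    return recon(c_a, s_ab), recon(c_b, s_ba)

# Usage: y_ab, y_ba = bidirectional_transfer(torch.rand(1, 3, 256, 256),
#                                            torch.rand(1, 3, 256, 256),
#                                            alpha=0.5)
```

Setting `alpha=0` in this sketch reproduces each image with its own style, while `alpha=1` fully exchanges styles, mirroring the controllable style degree described in the abstract.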