MATGAN: Unified GANs for Multimodal Attribute Transfer by Coarse-to-Fine Disentangling Representations
Xi Guo, Qiang Rao, Kun He, Fang Chen, Bing Yu, Bailan Feng, Jian Huang, Qin Yang
2022 International Conference on Frontiers of Artificial Intelligence and Machine Learning (FAIML), June 2022
DOI: 10.1109/FAIML57028.2022.00030 (https://doi.org/10.1109/FAIML57028.2022.00030)
Abstract
Image attribute transfer aims to change an image into a target image with the desired attributes. The task poses two main challenges: multi-domain transfer and attribute-level multimodality. The first means editing multiple attributes with a single model; the second means producing diverse appearances for a target attribute. Existing methods cannot address both problems simultaneously, and many works focus on image-level rather than attribute-level multimodality. In this paper, we propose MATGAN, a novel coarse-to-fine disentangling representation framework for multimodal attribute transfer. In the coarse disentangling stage, we embed images into a content space and an attribute space to obtain image-level multimodality. In the fine disentangling stage, we further disentangle the attribute space so that each sub-space is bound to a single attribute, enabling attribute-level multimodal and multi-domain transfer. Extensive experiments demonstrate the effectiveness of our approach.
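To make the two-stage idea concrete, below is a minimal PyTorch sketch of coarse-to-fine disentangling as the abstract describes it: a coarse stage that maps an image to a content code and an attribute code, and a fine stage that splits the attribute code into one sub-code per attribute so a single attribute can be resampled independently. All class names (CoarseEncoder, FineDisentangler), layer choices, dimensions, and the even per-attribute split are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of coarse-to-fine disentangling (assumed architecture,
# not the authors' MATGAN implementation).
import torch
import torch.nn as nn


class CoarseEncoder(nn.Module):
    """Coarse stage: embed an image into a content code and an attribute code."""

    def __init__(self, content_dim=256, attr_dim=64):
        super().__init__()
        self.content_head = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, content_dim),
        )
        self.attr_head = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, attr_dim),
        )

    def forward(self, x):
        return self.content_head(x), self.attr_head(x)


class FineDisentangler(nn.Module):
    """Fine stage: split the attribute code into one sub-code per attribute,
    so each attribute can be varied on its own (attribute-level multimodality)."""

    def __init__(self, attr_dim=64, num_attrs=4):
        super().__init__()
        assert attr_dim % num_attrs == 0
        self.chunk = attr_dim // num_attrs

    def forward(self, attr_code):
        # (B, attr_dim) -> list of num_attrs tensors of shape (B, attr_dim // num_attrs)
        return list(torch.split(attr_code, self.chunk, dim=1))


if __name__ == "__main__":
    x = torch.randn(2, 3, 128, 128)        # a batch of input images
    content, attr = CoarseEncoder()(x)      # coarse disentangling
    per_attr = FineDisentangler()(attr)     # fine, per-attribute sub-codes
    # Resample only one attribute's sub-code to get a diverse result for that
    # attribute while keeping content and the other attributes fixed.
    per_attr[0] = torch.randn_like(per_attr[0])
    print(content.shape, [c.shape for c in per_attr])
```

In this reading, image-level multimodality comes from the coarse content/attribute split, while attribute-level multimodality comes from resampling an individual sub-code; a generator (not shown) would consume the content code together with the recombined attribute sub-codes.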