GI-AEE: GAN Inversion Based Attentive Expression Embedding Network For Facial Expression Editing
Yun Zhang, R. Liu, Yifan Pan, Dehao Wu, Yuesheng Zhu, Zhiqiang Bai
2021 IEEE International Conference on Image Processing (ICIP), 19 September 2021
DOI: 10.1109/ICIP42928.2021.9506434
Facial expression editing aims to modify facial expressions according to specified conditions. Existing methods adopt an encoder-decoder architecture guided by an expression condition to synthesize the desired expression. However, they tend to produce artifacts and blur in expression-intensive regions, because they must simultaneously modify the expression-changing regions and keep the other attributes consistent with the source image. To address these issues, we propose a GAN Inversion based Attentive Expression Embedding network (GI-AEE) for facial expression editing, which decouples the task via GAN inversion to alleviate the strong influence of the source image on the target image and produces high-quality expression editing results. Furthermore, unlike existing methods that embed the expression condition directly into the network, we propose an Attentive Expression Embedding module that embeds the corresponding expression vectors into different facial regions, producing more plausible results. Qualitative and quantitative experiments demonstrate that our method outperforms state-of-the-art expression editing methods.
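The abstract describes the Attentive Expression Embedding module only at a high level: expression vectors are injected into different facial regions under attention, rather than being concatenated globally. The sketch below is one hypothetical reading of such a module, not the authors' implementation; the feature size, the expression-vector dimension (e.g. an action-unit intensity vector of length 17), and the use of a single sigmoid spatial attention map are assumptions introduced for illustration.

```python
# Illustrative sketch only: layer names, feature sizes, and the single spatial
# attention map are assumptions, not the authors' released GI-AEE code.
import torch
import torch.nn as nn


class AttentiveExpressionEmbedding(nn.Module):
    """Fuses an expression condition vector into image features, weighted by a
    learned spatial attention map so that expression-intensive facial regions
    receive the expression embedding while other regions stay close to the
    source features (hypothetical reading of the AEE module)."""

    def __init__(self, feat_channels: int = 256, expr_dim: int = 17):
        super().__init__()
        # Project the expression vector into the feature channel space.
        self.expr_proj = nn.Linear(expr_dim, feat_channels)
        # Predict a per-pixel attention map from the image features.
        self.attn = nn.Sequential(
            nn.Conv2d(feat_channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, expr: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) image features; expr: (B, expr_dim) condition vector.
        b, c, h, w = feat.shape
        expr_map = self.expr_proj(expr).view(b, c, 1, 1).expand(b, c, h, w)
        attn = self.attn(feat)  # (B, 1, H, W), values in [0, 1]
        # Inject the expression embedding only where attention is high,
        # leaving expression-irrelevant regions largely unchanged.
        return feat + attn * expr_map


if __name__ == "__main__":
    module = AttentiveExpressionEmbedding(feat_channels=256, expr_dim=17)
    feats = torch.randn(2, 256, 32, 32)  # e.g. features of a GAN-inverted face
    expr = torch.randn(2, 17)            # e.g. target action-unit intensities
    print(module(feats, expr).shape)     # torch.Size([2, 256, 32, 32])
```

In this reading, the attention map plays the role the abstract attributes to the module: it confines the conditioning signal to expression-changing regions, which is consistent with the paper's stated goal of reducing artifacts and blur there while preserving the source image's other attributes.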