{"title":"FaST-Net: Face Style Self-Transfer Network for masked face inpainting","authors":"Yiming Li, Jianpeng Chen, Yazhou Ren, X. Pu","doi":"10.1117/12.2673410","DOIUrl":null,"url":null,"abstract":"During the COVID-19 coronavirus epidemic, people usually wear masks to prevent the spread of the virus, which has become a major obstacle when we use face-based computer vision techniques such as face recognition and face detection. So masked face inpainting technique is desired. Actually, the distribution of face features is strongly correlated with each other, but existing inpainting methods typically ignore the relationship between face feature distributions. To address this issue, in this paper, we first show that the face image inpainting task can be seen as a distribution alignment between face features in damaged and valid regions, and style transfer is a distribution alignment process. Based on this theory, we propose a novel face inpainting model considering the probability distribution between face features, namely Face Style Self-Transfer Network (FaST-Net). Through the proposed style self-transfer mechanism, FaST-Net can align the style distribution of features in the inpainting region with the style distribution of features in the valid region of a face. Ablation studies have validated the effectiveness of FaST-Net, and experimental results on two popular human face datasets (CelebA and VGGFace) exhibit its superior performance compared with existing state-of-the-art methods.","PeriodicalId":176918,"journal":{"name":"2nd International Conference on Digital Society and Intelligent Systems (DSInS 2022)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2nd International Conference on Digital Society and Intelligent Systems (DSInS 2022)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2673410","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
During the COVID-19 epidemic, people commonly wear masks to prevent the spread of the virus, which poses a major obstacle to face-based computer vision techniques such as face recognition and face detection. A masked face inpainting technique is therefore desirable. The distributions of face features are strongly correlated with one another, yet existing inpainting methods typically ignore the relationships between face feature distributions. To address this issue, we first show that the face image inpainting task can be viewed as a distribution alignment between face features in the damaged and valid regions, and that style transfer is itself a distribution alignment process. Based on this view, we propose a novel face inpainting model that accounts for the probability distributions of face features, namely the Face Style Self-Transfer Network (FaST-Net). Through the proposed style self-transfer mechanism, FaST-Net aligns the style distribution of features in the inpainting region with the style distribution of features in the valid region of a face. Ablation studies validate the effectiveness of FaST-Net, and experimental results on two popular face datasets (CelebA and VGGFace) demonstrate its superior performance compared with existing state-of-the-art methods.
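The abstract frames inpainting as aligning the feature distribution of the masked region with that of the valid region via style transfer. As a rough illustration only, the sketch below assumes that "style" means channel-wise mean and standard deviation (in the spirit of AdaIN-like transfer) and that the features of each region have already been gathered into tensors of shape (B, C, N); the actual FaST-Net mechanism is not specified in the abstract, so the function align_style and its inputs are hypothetical.

```python
import torch

def align_style(masked_feat: torch.Tensor, valid_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Hypothetical sketch of style-distribution alignment.

    masked_feat: features sampled from the inpainting (masked) region, shape (B, C, N_m).
    valid_feat:  features sampled from the valid (unmasked) region,    shape (B, C, N_v).
    Style is assumed to be channel-wise mean and standard deviation.
    """
    # Channel-wise statistics of the valid region (the style target).
    mu_v = valid_feat.mean(dim=-1, keepdim=True)
    std_v = valid_feat.std(dim=-1, keepdim=True) + eps

    # Channel-wise statistics of the masked region (the style source to replace).
    mu_m = masked_feat.mean(dim=-1, keepdim=True)
    std_m = masked_feat.std(dim=-1, keepdim=True) + eps

    # Whiten the masked-region features, then re-color them with the
    # valid-region statistics, aligning the two style distributions.
    return (masked_feat - mu_m) / std_m * std_v + mu_v
```

In practice such an alignment step would sit inside a learned encoder-decoder inpainting network; the sketch only shows the statistic-matching idea the abstract describes, not the paper's architecture.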