{"title":"Multi-task ConvNet for blind face inpainting with application to face verification","authors":"Shu Zhang, R. He, Zhenan Sun, T. Tan","doi":"10.1109/ICB.2016.7550058","DOIUrl":null,"url":null,"abstract":"Face verification between ID photos and life photos (FVBIL) is gaining traction with the rapid development of the Internet. However, ID photos provided by the Chinese administration center are often corrupted with wavy lines to prevent misuse, which poses great difficulty to accurate FVBIL. Therefore, this paper tries to improve the verification performance by studying a new problem, i.e. blind face inpainting, where we aim at restoring clean face images from the corrupted ID photos. The term blind indicates that the locations of corruptions are not known in advance. We formulate blind face inpainting as a joint detection and reconstruction problem. A multi-task ConvNet is accordingly developed to facilitate end to end network training for accurate and fast inpainting. The ConvNet is used to (i) regress the residual values between the clean/corrupted ID photo pairs and (ii) predict the positions of residual regions. Moreover, to achieve better inpainting results, we employ a skip connection to fuse information in the intermediate layer. To enable training of our ConvNet, we collect a dataset of synthetic clean/corrupted ID photo pairs with 500 thousand samples from around 10 thousand individuals. Experiments demonstrate that our multi-task ConvNet achieves superior performance in terms of reconstruction errors, convergence speed and verification accuracy.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Conference on Biometrics (ICB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICB.2016.7550058","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 24
Abstract
Face verification between ID photos and life photos (FVBIL) is gaining traction with the rapid development of the Internet. However, ID photos provided by the Chinese administration center are often corrupted with wavy lines to prevent misuse, which poses great difficulty for accurate FVBIL. Therefore, this paper tries to improve verification performance by studying a new problem, i.e., blind face inpainting, in which we aim to restore clean face images from corrupted ID photos. The term blind indicates that the locations of the corruptions are not known in advance. We formulate blind face inpainting as a joint detection and reconstruction problem. A multi-task ConvNet is accordingly developed to facilitate end-to-end network training for accurate and fast inpainting. The ConvNet is used to (i) regress the residual values between the clean/corrupted ID photo pairs and (ii) predict the positions of the residual regions. Moreover, to achieve better inpainting results, we employ a skip connection to fuse information from the intermediate layer. To enable training of our ConvNet, we collect a dataset of synthetic clean/corrupted ID photo pairs with 500 thousand samples from around 10 thousand individuals. Experiments demonstrate that our multi-task ConvNet achieves superior performance in terms of reconstruction error, convergence speed, and verification accuracy.
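To make the joint detection-and-reconstruction formulation concrete, here is a minimal PyTorch sketch of a two-head ConvNet in the spirit of the abstract: a shared encoder, a residual-regression head (task i), a corruption-detection head (task ii), and a skip connection that fuses an intermediate feature map before the heads. The class and function names (BlindInpaintNet, multitask_loss), the layer widths, and the loss weighting lam are my assumptions for illustration, not the paper's actual architecture or hyperparameters.

```python
# Hedged sketch of a multi-task blind-inpainting ConvNet; details are assumed,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlindInpaintNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder (depth/widths are illustrative assumptions).
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True))
        self.enc3 = nn.Sequential(nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True))
        # Task (i): regress the per-pixel residual between corrupted and clean photos.
        self.res_head = nn.Conv2d(128 + 64, 3, 3, padding=1)
        # Task (ii): predict per-pixel logits for the corrupted (residual) regions.
        self.mask_head = nn.Conv2d(128 + 64, 1, 3, padding=1)

    def forward(self, x):
        f1 = self.enc1(x)                    # intermediate features kept for the skip connection
        f3 = self.enc3(self.enc2(f1))
        fused = torch.cat([f3, f1], dim=1)   # skip connection fusing intermediate-layer information
        return self.res_head(fused), self.mask_head(fused)

def multitask_loss(residual_pred, mask_logits, residual_gt, mask_gt, lam=1.0):
    # Joint objective sketch: L2 on the residual plus binary cross-entropy on the
    # corruption mask, weighted by an assumed scalar lam.
    rec = F.mse_loss(residual_pred, residual_gt)
    det = F.binary_cross_entropy_with_logits(mask_logits, mask_gt)
    return rec + lam * det
```

Under this sketch, the restored face would be obtained by subtracting the predicted residual from the corrupted input (or adding it, depending on the sign convention chosen for the residual targets); the mask head supplies the "blind" detection of where the wavy-line corruptions lie.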