Face 2D to 3D Reconstruction Network Based on Head Pose and 3D Facial Landmarks

Authors: Yuanquan Xu, Cheolkon Jung
Published in: 2021 International Conference on Visual Communications and Image Processing (VCIP)
Publication date: 2021-12-05
DOI: 10.1109/VCIP53242.2021.9675325
Citations: 0
Abstract
Although most existing methods based on the 3D morphable model (3DMM) require annotated parameters as ground truth for training, only a few datasets contain them. Moreover, the dimensional gap makes it difficult to acquire accurate 3D face models aligned with the input images. In this paper, we propose a face 2D-to-3D reconstruction network based on head pose and 3D facial landmarks. We build a head-pose-guided face reconstruction network that regresses an accurate 3D face model with the help of 3D facial landmarks. Unlike 3DMM parameters, head pose and 3D facial landmarks can be estimated reliably even in in-the-wild images. Experiments on the 300W-LP, AFLW2000-3D, and CelebA-HQ datasets show that the proposed method successfully reconstructs a 3D face model from a single RGB image thanks to the 3D facial landmarks, and achieves state-of-the-art performance in terms of the normalized mean error (NME).
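The abstract reports results in terms of the normalized mean error (NME). The paper itself does not spell out the formula here, but a common definition on landmark benchmarks such as AFLW2000-3D is the mean per-landmark Euclidean distance between predicted and ground-truth landmarks, normalized by the ground-truth bounding-box size. A minimal sketch under that assumption (the function name `nme` and the square-root-of-area normalizer are illustrative choices, not taken from the paper):

```python
import numpy as np

def nme(pred, gt):
    """Normalized mean error for 2D landmarks.

    pred, gt: arrays of shape (N, 2) with predicted and ground-truth
    landmark coordinates. The error is the mean Euclidean distance,
    normalized by sqrt(width * height) of the ground-truth bounding box
    (one common convention; papers vary in the normalizer they use).
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Per-landmark Euclidean distances
    dists = np.linalg.norm(pred - gt, axis=1)
    # Bounding-box normalization factor: sqrt(w * h) of the ground truth
    mins, maxs = gt.min(axis=0), gt.max(axis=0)
    d = np.sqrt(np.prod(maxs - mins))
    return dists.mean() / d

# Example: landmarks shifted by 1 px on a 10x10 bounding box -> NME = 0.1
gt = [[0, 0], [0, 10], [10, 0], [10, 10]]
pred = [[1, 0], [1, 10], [11, 0], [11, 10]]
print(nme(pred, gt))  # 0.1
```

Lower NME is better; normalizing by the bounding-box size makes the metric comparable across face images of different scales.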