Human Face Reconstruction under a HMD Occlusion
Zhengfu Peng, Ting Lu, Zhaowen Chen, Xiangmin Xu, Shu-Min Lin
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23
DOI: 10.1109/VR.2019.8797959
Aided by existing vision perception and motion capture technologies, virtual reality (VR) can immerse users in virtual environments. However, it is difficult for users to convey their actual emotions to others in these environments: because head-mounted displays (HMDs) significantly occlude the user's face, it is hard to recover the full face directly with traditional techniques. In this paper, we introduce a novel method that addresses this problem using only an RGB image of the person, without the need for any other sensors or devices. First, we use facial landmark points to estimate the user's face shape, expression, and pose. Then, using the information in the non-occluded face region, we recover the face texture and the illumination of the current scene.
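The abstract does not spell out the fitting procedure. As a rough, hypothetical illustration only (not the authors' implementation), landmark-driven estimation of shape, expression, and pose is commonly posed as fitting a 3D morphable model (3DMM) to the detected 2D landmarks under a weak-perspective camera. The sketch below assumes placeholder model arrays (mean_shape, shape_basis, expr_basis) and placeholder detected landmarks (landmarks_2d); in practice these would come from a real 3DMM and a landmark detector.

```python
# Hypothetical sketch of landmark-based 3DMM fitting; not the paper's code.
# Assumes a linear 3DMM (mean + shape/expression bases), 68 detected 2D
# landmarks, and a weak-perspective camera (scale + rotation + 2D translation).
import numpy as np
from scipy.optimize import least_squares

N_LMK = 68                      # number of facial landmarks
N_SHAPE, N_EXPR = 40, 20        # numbers of shape / expression coefficients

# Placeholder model data; a real system would load these from a 3DMM.
mean_shape = np.zeros((N_LMK, 3))
shape_basis = np.random.randn(N_SHAPE, N_LMK, 3) * 0.01
expr_basis = np.random.randn(N_EXPR, N_LMK, 3) * 0.01
landmarks_2d = np.random.rand(N_LMK, 2)    # detected image landmarks (placeholder)

def rotation_matrix(angles):
    """Rotation from yaw/pitch/roll angles in radians."""
    yaw, pitch, roll = angles
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def residuals(params):
    """2D reprojection error of the model landmarks plus a small regularizer."""
    alpha = params[:N_SHAPE]                    # shape coefficients
    beta = params[N_SHAPE:N_SHAPE + N_EXPR]     # expression coefficients
    angles = params[-6:-3]                      # pose: rotation angles
    scale, tx, ty = params[-3:]                 # weak-perspective scale + translation
    verts = (mean_shape
             + np.tensordot(alpha, shape_basis, axes=1)
             + np.tensordot(beta, expr_basis, axes=1))
    proj = scale * (verts @ rotation_matrix(angles).T)[:, :2] + np.array([tx, ty])
    reg = 0.01 * np.concatenate([alpha, beta])  # keep coefficients near the mean face
    return np.concatenate([(proj - landmarks_2d).ravel(), reg])

x0 = np.zeros(N_SHAPE + N_EXPR + 6)
x0[-3] = 1.0                                    # initial camera scale
fit = least_squares(residuals, x0)
print("final reprojection cost:", fit.cost)
```

With HMD occlusion, only the landmarks on the visible lower face would contribute residuals, so the regularization term matters more than usual to keep the occluded upper-face shape plausible; texture and scene illumination would then be estimated from the non-occluded region of the fitted mesh.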