Facial Geometric Detail Recovery via Implicit Representation

Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, Xiaokang Yang, S. Zafeiriou
{"title":"Facial Geometric Detail Recovery via Implicit Representation","authors":"Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, Xiaokang Yang, S. Zafeiriou","doi":"10.1109/FG57933.2023.10042505","DOIUrl":null,"url":null,"abstract":"Learning a dense 3D model with fine-scale details from a single facial image is highly challenging and ill-posed. To address this problem, many approaches fit smooth geometries through facial prior while learning details as additional displacement maps or personalized basis. However, these techniques typically require vast datasets of paired multi-view data or 3D scans, whereas such datasets are scarce and expensive. To alleviate heavy data dependency, we present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image. Specifically, we inpaint occluded facial parts, generate complete textures, and build an accurate multi-view dataset of the target subject. In order to estimate the detailed geometry, we define an implicit signed distance function and employ a physically-based implicit renderer to reconstruct fine geometric details from the generated multiview images. Our method not only recovers accurate facial details but also decomposes the diffuse and specular albedo, normals and shading components in a self-supervised way. Finally, we register the implicit shape details to a 3D Morphable Model template, which can be used in traditional modeling and rendering pipelines. Extensive experiments demonstrate that the proposed approach can reconstruct impressive facial details from a single image, especially when compared with state-of-the-art methods trained on large datasets.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FG57933.2023.10042505","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Learning a dense 3D model with fine-scale details from a single facial image is highly challenging and ill-posed. To address this problem, many approaches fit smooth geometries through facial priors while learning details as additional displacement maps or a personalized basis. However, these techniques typically require vast datasets of paired multi-view data or 3D scans, whereas such datasets are scarce and expensive. To alleviate this heavy data dependency, we present a robust texture-guided geometric detail recovery approach that uses only a single in-the-wild facial image. Specifically, we inpaint occluded facial parts, generate complete textures, and build an accurate multi-view dataset of the target subject. To estimate the detailed geometry, we define an implicit signed distance function and employ a physically-based implicit renderer to reconstruct fine geometric details from the generated multi-view images. Our method not only recovers accurate facial details but also decomposes the diffuse and specular albedo, normals, and shading components in a self-supervised way. Finally, we register the implicit shape details to a 3D Morphable Model template, which can be used in traditional modeling and rendering pipelines. Extensive experiments demonstrate that the proposed approach reconstructs impressive facial details from a single image, especially when compared with state-of-the-art methods trained on large datasets.
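The core components the abstract names (a learned signed distance function, a physically-based implicit renderer, and a self-supervised decomposition into albedo, normals, and shading) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: SDFNet, sphere_trace, surface_normals, shade, and all hyperparameters below are hypothetical stand-ins, and the specular term is a simple Blinn-Phong placeholder rather than the paper's actual reflectance model.

# Minimal sketch (assumed, not the authors' code): an MLP encodes the
# face as a signed distance function (SDF), sphere tracing finds surface
# points along camera rays, and a diffuse + specular shading model
# renders them. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """MLP mapping a 3D point to (signed distance, diffuse RGB, specular scalar)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1 + 3 + 1),  # sdf, diffuse albedo, specular albedo
        )

    def forward(self, x):
        out = self.net(x)
        sdf = out[..., :1]
        diffuse = torch.sigmoid(out[..., 1:4])
        specular = torch.sigmoid(out[..., 4:5])
        return sdf, diffuse, specular

def sphere_trace(model, origins, dirs, n_steps=64):
    """March each ray forward by the predicted distance until it reaches the surface."""
    t = torch.zeros(origins.shape[0], 1)
    for _ in range(n_steps):
        pts = origins + t * dirs
        sdf, _, _ = model(pts)
        t = t + sdf  # step by the signed distance to the nearest surface
    pts = origins + t * dirs
    sdf, diffuse, specular = model(pts)
    hit = sdf.abs() < 1e-3  # rays that converged onto the zero level set
    return pts, diffuse, specular, hit

def surface_normals(model, pts):
    """Normals as the normalized autograd gradient of the SDF (no supervision needed)."""
    pts = pts.detach().requires_grad_(True)
    sdf, _, _ = model(pts)
    (grad,) = torch.autograd.grad(sdf.sum(), pts, create_graph=True)
    return torch.nn.functional.normalize(grad, dim=-1)

def shade(normals, diffuse, specular, light_dir, view_dir, shininess=32.0):
    """Lambertian diffuse term plus a Blinn-Phong specular term."""
    n_dot_l = (normals * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    half = torch.nn.functional.normalize(light_dir + view_dir, dim=-1)
    spec = (normals * half).sum(-1, keepdim=True).clamp(min=0.0) ** shininess
    return diffuse * n_dot_l + specular * spec

if __name__ == "__main__":
    model = SDFNet()
    origins = torch.zeros(4, 3); origins[:, 2] = -2.0      # rays start at z = -2
    dirs = torch.tensor([[0.0, 0.0, 1.0]]).expand(4, 3)    # looking down +z
    pts, diffuse, specular, hit = sphere_trace(model, origins, dirs)
    normals = surface_normals(model, pts)
    rgb = shade(normals, diffuse, specular,
                light_dir=torch.tensor([0.0, 0.0, -1.0]),
                view_dir=torch.tensor([0.0, 0.0, -1.0]))
    print(rgb.shape)  # torch.Size([4, 3])

Note how the normals are obtained as the autograd gradient of the SDF rather than from any ground-truth supervision; in a renderer of this kind, a photometric loss against the generated multi-view images is what drives the self-supervised decomposition into albedo, normals, and shading that the abstract describes.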