Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization

Connor Z. Lin, Koki Nagano, J. Kautz, Eric Chan, Umar Iqbal, L. Guibas, Gordon Wetzstein, S. Khamis
DOI: 10.1145/3588432.3591494
Published in: ACM SIGGRAPH 2023 Conference Proceedings, 2023-05-04
Citations: 4

Abstract

There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction, they cannot easily capture geometric and appearance details. Methods based on neural implicit representations, such as signed distance functions (SDF) or neural radiance fields, approach photo-realism, but are difficult to animate and do not generalize well to unseen data. To tackle this problem, we propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing. Trained from a collection of high-quality 3D scans, our face model is parameterized by geometry, expression, and texture latent codes with a learned SDF and explicit UV texture parameterization. Once trained, we can reconstruct an avatar from a single in-the-wild image by leveraging the learned prior to project the image into the latent space of our model. Our implicit morphable face models can be used to render an avatar from novel views, animate facial expressions by modifying expression codes, and edit textures by directly painting on the learned UV-texture maps. We demonstrate quantitatively and qualitatively that our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
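The abstract names the model's core ingredients: a learned signed distance function conditioned on geometry and expression latent codes, paired with an explicit UV texture parameterization so that each surface point also receives a texture coordinate. A minimal sketch of that structure is below; all network sizes, weights, and function names are hypothetical stand-ins (the paper's actual architecture and training are not reproduced here), but it illustrates how one query can yield both a distance value and a UV lookup into an editable texture map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent-code and hidden-layer sizes (not from the paper).
GEO_DIM, EXP_DIM, HIDDEN = 64, 32, 128

# Toy weights for a single-hidden-layer "SDF + UV" head.
W1 = rng.standard_normal((3 + GEO_DIM + EXP_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 1 + 2)) * 0.1  # 1 SDF value + 2 UV coords
b2 = np.zeros(3)

def sdf_and_uv(points, z_geo, z_exp):
    """Map 3D points plus geometry/expression codes to (sdf, uv)."""
    n = points.shape[0]
    x = np.concatenate([points,
                        np.tile(z_geo, (n, 1)),
                        np.tile(z_exp, (n, 1))], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    out = h @ W2 + b2
    sdf = out[:, 0]                           # signed distance to the face surface
    uv = 1.0 / (1.0 + np.exp(-out[:, 1:]))    # sigmoid keeps UV inside [0, 1]^2
    return sdf, uv

def sample_texture(tex_map, uv):
    """Nearest-neighbour lookup into an explicit UV texture map."""
    h, w, _ = tex_map.shape
    ij = np.round(uv * np.array([h - 1, w - 1])).astype(int)
    return tex_map[ij[:, 0], ij[:, 1]]

z_geo = rng.standard_normal(GEO_DIM)          # per-identity geometry code
z_exp = rng.standard_normal(EXP_DIM)          # per-frame expression code
pts = rng.standard_normal((5, 3))             # query points near the face
texture = rng.random((256, 256, 3))           # stand-in for the learned texture map

sdf, uv = sdf_and_uv(pts, z_geo, z_exp)
rgb = sample_texture(texture, uv)
```

Because appearance lives in the explicit `texture` array rather than inside the network weights, edits made by painting on the UV map propagate directly to the rendered surface, and animating an expression amounts to swapping `z_exp` while keeping `z_geo` fixed. Single-image reconstruction, as the abstract describes it, would then optimize the latent codes so the rendered model matches the input photograph.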