Neural 3D Face Shape Stylization Based on Single Style Template via Weakly Supervised Learning.

Peizhi Yan, Rabab K Ward, Qiang Tang, Shan Du
{"title":"Neural 3D Face Shape Stylization Based on Single Style Template via Weakly Supervised Learning.","authors":"Peizhi Yan, Rabab K Ward, Qiang Tang, Shan Du","doi":"10.1109/TVCG.2025.3573690","DOIUrl":null,"url":null,"abstract":"<p><p>3D Face shape stylization refers to transforming a realistic 3D face shape into a different style, such as a cartoon face style. To solve this problem, this paper proposes modeling this task as a deformation transfer problem. This approach significantly reduces labor costs, as the artists would only need to create a single template for each face style. Realistic facial features of the original 3D face e.g. the nose or chin shape, would thus be automatically transferred to those in the style template. Deformation transfer methods, however, have two drawbacks. They are slow and they require re-optimization for every new input face. To address these weaknesses, we propose a neural network-based 3D face shape stylization method. This method is trained through weakly supervised learning, and its template's structure is preserved using our novel templateguided mesh smoothing regularization. Our method is the first learning-based deformation transfer method for 3D face shape stylization. Its employment offers the useful and practical benefit of not requiring paired training data. The experiments show that the quality of the stylized faces obtained by our method is comparable to that of the traditional deformation transfer method, achieving an average Chamfer Distance of approximately 0.01mm. However, our approach significantly boosts the processing speed, achieving a rate approximately 3,000 times faster than the traditional deformation transfer. Project page: https://peizhiyan.github.io/docs/style.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3573690","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

3D face shape stylization refers to transforming a realistic 3D face shape into a different style, such as a cartoon face style. To solve this problem, this paper proposes modeling the task as a deformation transfer problem. This approach significantly reduces labor costs, as artists only need to create a single template for each face style. Realistic facial features of the original 3D face, e.g., the nose or chin shape, are thus automatically transferred to the corresponding features in the style template. Deformation transfer methods, however, have two drawbacks: they are slow, and they require re-optimization for every new input face. To address these weaknesses, we propose a neural network-based 3D face shape stylization method. The method is trained through weakly supervised learning, and the template's structure is preserved using our novel template-guided mesh smoothing regularization. Our method is the first learning-based deformation transfer method for 3D face shape stylization, and it offers the practical benefit of not requiring paired training data. Experiments show that the quality of the stylized faces obtained by our method is comparable to that of the traditional deformation transfer method, achieving an average Chamfer Distance of approximately 0.01 mm, while running approximately 3,000 times faster than traditional deformation transfer. Project page: https://peizhiyan.github.io/docs/style.
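The abstract does not specify the exact form of the template-guided mesh smoothing regularization. The following is a minimal sketch of one plausible variant: penalizing the deviation of the predicted mesh's uniform Laplacian (delta) coordinates from those of the style template, so that the template's local surface structure is preserved. The function names, the uniform Laplacian choice, and the assumption that the output mesh shares the template's connectivity are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a template-guided mesh smoothing regularizer (assumed form, not the paper's code).
import numpy as np

def uniform_laplacian_coords(vertices: np.ndarray, neighbors: list[list[int]]) -> np.ndarray:
    """Delta coordinates: each vertex minus the mean of its one-ring neighbors."""
    deltas = np.empty_like(vertices)
    for i, nbrs in enumerate(neighbors):
        deltas[i] = vertices[i] - vertices[nbrs].mean(axis=0)
    return deltas

def template_guided_smoothing_loss(pred_vertices: np.ndarray,
                                   template_vertices: np.ndarray,
                                   neighbors: list[list[int]]) -> float:
    """Mean squared difference between predicted and template delta coordinates."""
    pred_delta = uniform_laplacian_coords(pred_vertices, neighbors)
    tmpl_delta = uniform_laplacian_coords(template_vertices, neighbors)
    return float(np.mean((pred_delta - tmpl_delta) ** 2))
```

The reported accuracy (an average Chamfer Distance of roughly 0.01 mm against the traditional deformation-transfer result) refers to a standard symmetric Chamfer Distance between two vertex sets. The sketch below shows one common way to compute it with NumPy/SciPy; the averaging convention and the use of millimetres follow the abstract, but the exact evaluation protocol is an assumption.

```python
# Sketch of a symmetric Chamfer Distance between two point sets (assumed evaluation, not the authors' code).
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Average symmetric Chamfer Distance between (N, 3) and (M, 3) point sets."""
    tree_a = cKDTree(points_a)
    tree_b = cKDTree(points_b)
    dist_a_to_b, _ = tree_b.query(points_a)  # nearest neighbor in B for each point in A
    dist_b_to_a, _ = tree_a.query(points_b)  # nearest neighbor in A for each point in B
    return 0.5 * (dist_a_to_b.mean() + dist_b_to_a.mean())

if __name__ == "__main__":
    # Hypothetical vertex arrays in millimetres: stylized output vs. deformation-transfer baseline.
    ours = np.random.rand(5000, 3) * 100.0
    baseline = ours + np.random.normal(scale=0.01, size=ours.shape)
    print(f"Chamfer Distance: {chamfer_distance(ours, baseline):.4f} mm")
```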
