Real-Time Technologies for Realistic Digital Humans: Facial Performance and Hair Simulation

Krasimir Nechevski, Mark Schoennagel
{"title":"Real time technologies for Realistic Digital Humans: facial performance and hair simulation","authors":"Krasimir Nechevski, Mark Schoennagel","doi":"10.1145/3550453.3570122","DOIUrl":null,"url":null,"abstract":"We have identified the world's most extensive parameter space for the human head based on over 4TB of 4D data acquired from multiple actors. Ziva's proprietary machine-learning processes can apply this data set to any number of secondary 3D heads, enabling them all to perform novel facial expressions in real-time while preserving volume and staying within the natural range of human expressions. Facial performances can then be augmented and tailored with Ziva expressions controls, solving the costly limitations of scalability, realism, artist control, and speed. For this presentation, we will discuss and demonstrate how this innovation can improve the overall quality of RT3D faces for all productions while simplifying and accelerating the overall production workflow and enabling mass production of high-performance real-time characters. We will then illustrate how performance capture can be decoupled from asset production, enabling actor-nonspecific performance capture, by showing a single performance being applied to multiple faces of varying proportions, enabling any performance to run on any head, all at state-of-the-art quality. We will additionally highlight a new integrated Hair solution for authoring / importing / simulating/ rendering strand-based hair in Unity. Built from the ground up with Unity users in mind, and evolved and hardened during the production of Enemies, the hair system is applicable not only to realistic digital humans, but also to much more stylized content and games. Using a fast and flexible GPU-based solver that works on both strand- and volume-information, the system enables users to interactively set up 'Hair Instances' and interact with those instances as they are simulated and rendered in real time. 
We will concentrate on demonstrating the simulation part of the system, including the strand-based solver, volume-based quantities such as density and pressure, the fully configurable set of constraints and the level of detail support that artists have.","PeriodicalId":423970,"journal":{"name":"Proceedings of the SIGGRAPH Asia 2022 Real-Time Live!","volume":"86 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the SIGGRAPH Asia 2022 Real-Time Live!","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3550453.3570122","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We have identified the world's most extensive parameter space for the human head, based on over 4 TB of 4D data acquired from multiple actors. Ziva's proprietary machine-learning processes can apply this data set to any number of secondary 3D heads, enabling them all to perform novel facial expressions in real time while preserving volume and staying within the natural range of human expressions. Facial performances can then be augmented and tailored with Ziva expression controls, addressing the costly limitations of scalability, realism, artist control, and speed. In this presentation, we will discuss and demonstrate how this innovation can improve the overall quality of real-time 3D (RT3D) faces for all productions, while simplifying and accelerating the production workflow and enabling mass production of high-performance real-time characters. We will then illustrate how performance capture can be decoupled from asset production, enabling actor-nonspecific performance capture: a single performance is applied to multiple faces of varying proportions, so that any performance can run on any head, all at state-of-the-art quality.

We will additionally highlight a new integrated hair solution for authoring, importing, simulating, and rendering strand-based hair in Unity. Built from the ground up with Unity users in mind, and evolved and hardened during the production of Enemies, the hair system is applicable not only to realistic digital humans but also to much more stylized content and games. Using a fast and flexible GPU-based solver that operates on both strand and volume information, the system enables users to interactively set up 'Hair Instances' and interact with those instances as they are simulated and rendered in real time.
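The abstract does not disclose the solver's internals, but strand-based hair simulation of this kind is commonly built on position-based dynamics: integrate particle positions under gravity, then iteratively project inextensibility constraints along each strand. The sketch below illustrates that general class of technique on the CPU for a single strand; all names and parameters are illustrative assumptions, not the Unity hair system's actual API.

```python
# Minimal position-based-dynamics sketch of one hair strand.
# Illustrative only: the real GPU solver described in the talk is not public.
import math

GRAVITY = -9.81     # m/s^2
REST_LEN = 0.05     # segment rest length (m), assumed
DT = 1.0 / 60.0     # frame timestep
ITERATIONS = 8      # constraint-projection passes per frame

def step_strand(pos, prev):
    """Advance a strand of particles one frame.
    pos/prev: lists of [x, y, z]; particle 0 is the pinned root.
    Returns (current, previous) positions for the next frame."""
    # 1. Verlet-style prediction under gravity.
    new = []
    for p, q in zip(pos, prev):
        vx, vy, vz = p[0] - q[0], p[1] - q[1], p[2] - q[2]
        new.append([p[0] + vx, p[1] + vy + GRAVITY * DT * DT, p[2] + vz])
    new[0] = list(pos[0])  # root stays attached to the scalp
    # 2. Gauss-Seidel projection of distance (inextensibility) constraints.
    for _ in range(ITERATIONS):
        for i in range(len(new) - 1):
            a, b = new[i], new[i + 1]
            d = [b[k] - a[k] for k in range(3)]
            length = math.sqrt(sum(c * c for c in d)) or 1e-9
            corr = (length - REST_LEN) / length
            for k in range(3):
                if i == 0:                 # root is fixed: move only the child
                    b[k] -= corr * d[k]
                else:                      # otherwise split the correction
                    a[k] += 0.5 * corr * d[k]
                    b[k] -= 0.5 * corr * d[k]
    return new, pos
```

In a production solver the per-segment loop above maps naturally onto one GPU thread (or thread group) per strand, which is what makes this constraint style attractive for real-time hair.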
We will concentrate on demonstrating the simulation part of the system, including the strand-based solver, volume-based quantities such as density and pressure, the fully configurable set of constraints, and the level-of-detail support available to artists.
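The volume-based quantities mentioned above (density, pressure) are typically obtained by splatting strand particles into a uniform grid, from which a gradient can push strands apart to resist unnatural clumping. The sketch below shows that idea in its simplest form; the grid resolution, nearest-cell splatting, and the use of the density gradient as a stand-in for pressure are all assumptions for illustration, not details of the actual Unity implementation.

```python
# Minimal volume pass for strand hair: splat particles into a density grid,
# then sample a central-difference gradient. Illustrative sketch only.

GRID = 8             # grid resolution per axis (assumed)
CELL = 1.0 / GRID    # cell size for a unit-cube domain

def splat_density(particles):
    """Accumulate particle counts into a GRID^3 density field (nearest cell)."""
    density = [[[0.0] * GRID for _ in range(GRID)] for _ in range(GRID)]
    for x, y, z in particles:
        i = min(int(x / CELL), GRID - 1)
        j = min(int(y / CELL), GRID - 1)
        k = min(int(z / CELL), GRID - 1)
        density[i][j][k] += 1.0
    return density

def density_gradient(density, i, j, k):
    """Central-difference gradient at cell (i, j, k); strands can be nudged
    down this gradient so dense regions push hair outward."""
    def d(a, b, c):
        if 0 <= a < GRID and 0 <= b < GRID and 0 <= c < GRID:
            return density[a][b][c]
        return 0.0
    return (
        (d(i + 1, j, k) - d(i - 1, j, k)) / (2 * CELL),
        (d(i, j + 1, k) - d(i, j - 1, k)) / (2 * CELL),
        (d(i, j, k + 1) - d(i, j, k - 1)) / (2 * CELL),
    )
```

On the GPU this splatting step is usually done with atomic adds into a 3D texture, which is one reason a solver that mixes strand and volume information can still run at interactive rates.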