{"title":"现实数字人的实时技术:面部表现和头发模拟","authors":"Krasimir Nechevski, Mark Schoennagel","doi":"10.1145/3550453.3570122","DOIUrl":null,"url":null,"abstract":"We have identified the world's most extensive parameter space for the human head based on over 4TB of 4D data acquired from multiple actors. Ziva's proprietary machine-learning processes can apply this data set to any number of secondary 3D heads, enabling them all to perform novel facial expressions in real-time while preserving volume and staying within the natural range of human expressions. Facial performances can then be augmented and tailored with Ziva expressions controls, solving the costly limitations of scalability, realism, artist control, and speed. For this presentation, we will discuss and demonstrate how this innovation can improve the overall quality of RT3D faces for all productions while simplifying and accelerating the overall production workflow and enabling mass production of high-performance real-time characters. We will then illustrate how performance capture can be decoupled from asset production, enabling actor-nonspecific performance capture, by showing a single performance being applied to multiple faces of varying proportions, enabling any performance to run on any head, all at state-of-the-art quality. We will additionally highlight a new integrated Hair solution for authoring / importing / simulating/ rendering strand-based hair in Unity. Built from the ground up with Unity users in mind, and evolved and hardened during the production of Enemies, the hair system is applicable not only to realistic digital humans, but also to much more stylized content and games. Using a fast and flexible GPU-based solver that works on both strand- and volume-information, the system enables users to interactively set up 'Hair Instances' and interact with those instances as they are simulated and rendered in real time. We will concentrate on demonstrating the simulation part of the system, including the strand-based solver, volume-based quantities such as density and pressure, the fully configurable set of constraints and the level of detail support that artists have.","PeriodicalId":423970,"journal":{"name":"Proceedings of the SIGGRAPH Asia 2022 Real-Time Live!","volume":"86 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Real time technologies for Realistic Digital Humans: facial performance and hair simulation\",\"authors\":\"Krasimir Nechevski, Mark Schoennagel\",\"doi\":\"10.1145/3550453.3570122\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We have identified the world's most extensive parameter space for the human head based on over 4TB of 4D data acquired from multiple actors. Ziva's proprietary machine-learning processes can apply this data set to any number of secondary 3D heads, enabling them all to perform novel facial expressions in real-time while preserving volume and staying within the natural range of human expressions. Facial performances can then be augmented and tailored with Ziva expressions controls, solving the costly limitations of scalability, realism, artist control, and speed. 
For this presentation, we will discuss and demonstrate how this innovation can improve the overall quality of RT3D faces for all productions while simplifying and accelerating the overall production workflow and enabling mass production of high-performance real-time characters. We will then illustrate how performance capture can be decoupled from asset production, enabling actor-nonspecific performance capture, by showing a single performance being applied to multiple faces of varying proportions, enabling any performance to run on any head, all at state-of-the-art quality. We will additionally highlight a new integrated Hair solution for authoring / importing / simulating/ rendering strand-based hair in Unity. Built from the ground up with Unity users in mind, and evolved and hardened during the production of Enemies, the hair system is applicable not only to realistic digital humans, but also to much more stylized content and games. Using a fast and flexible GPU-based solver that works on both strand- and volume-information, the system enables users to interactively set up 'Hair Instances' and interact with those instances as they are simulated and rendered in real time. We will concentrate on demonstrating the simulation part of the system, including the strand-based solver, volume-based quantities such as density and pressure, the fully configurable set of constraints and the level of detail support that artists have.\",\"PeriodicalId\":423970,\"journal\":{\"name\":\"Proceedings of the SIGGRAPH Asia 2022 Real-Time Live!\",\"volume\":\"86 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the SIGGRAPH Asia 2022 Real-Time Live!\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3550453.3570122\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the SIGGRAPH Asia 2022 Real-Time Live!","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3550453.3570122","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Real time technologies for Realistic Digital Humans: facial performance and hair simulation
We have identified the world's most extensive parameter space for the human head, based on over 4TB of 4D data acquired from multiple actors. Ziva's proprietary machine-learning processes can apply this data set to any number of secondary 3D heads, enabling all of them to perform novel facial expressions in real time while preserving volume and staying within the natural range of human expression. Facial performances can then be augmented and tailored with Ziva expression controls, addressing the costly limitations of scalability, realism, artist control, and speed. In this presentation, we will discuss and demonstrate how this innovation can improve the overall quality of real-time 3D (RT3D) faces for all productions while simplifying and accelerating the production workflow and enabling mass production of high-performance real-time characters. We will then illustrate how performance capture can be decoupled from asset production, enabling actor-nonspecific performance capture: a single performance is applied to multiple faces of varying proportions, so any performance can run on any head, all at state-of-the-art quality.

We will additionally highlight a new integrated hair solution for authoring, importing, simulating, and rendering strand-based hair in Unity. Built from the ground up with Unity users in mind, and evolved and hardened during the production of Enemies, the hair system is applicable not only to realistic digital humans but also to much more stylized content and games. Using a fast and flexible GPU-based solver that works on both strand and volume information, the system enables users to interactively set up 'Hair Instances' and interact with those instances as they are simulated and rendered in real time. We will concentrate on demonstrating the simulation part of the system, including the strand-based solver, volume-based quantities such as density and pressure, the fully configurable set of constraints, and the level-of-detail support available to artists.
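The retargeting idea described above can be made concrete with a small sketch. This is not Ziva's proprietary pipeline; it is a generic linear-rig stand-in (the `ExpressionRig` class, its fields, and `retarget` are all hypothetical names of ours) illustrating how one actor-nonspecific stream of expression-control weights can drive many heads, once each head owns its own deformation basis learned from 4D data:

```python
import numpy as np

# Hypothetical sketch: a shared performance is a sequence of expression-control
# weights; each head owns its own deformation basis, so one performance can
# drive heads of any proportion.

class ExpressionRig:
    """Per-head rig: maps shared expression weights to this head's vertices."""

    def __init__(self, neutral_verts: np.ndarray, basis: np.ndarray):
        # neutral_verts: (V, 3) rest-pose vertices for this specific head
        # basis: (K, V, 3) learned deformation modes, one per expression control
        self.neutral = neutral_verts
        self.basis = basis

    def evaluate(self, weights: np.ndarray) -> np.ndarray:
        # weights: (K,) shared control values, clamped so the result stays
        # within the natural range of human expression
        w = np.clip(weights, 0.0, 1.0)
        return self.neutral + np.einsum("k,kvc->vc", w, self.basis)

def retarget(performance: np.ndarray, rigs: list[ExpressionRig]):
    """Apply one captured performance (T, K) to every head, frame by frame."""
    for frame_weights in performance:
        yield [rig.evaluate(frame_weights) for rig in rigs]
```

The point of the structure is that the performance data never references a specific mesh: only the per-head basis does, which is what decouples performance capture from asset production.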
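The solver components named in the abstract (a strand-based solve, volume-based density and pressure, and a set of constraints) can likewise be sketched. The production system is a GPU solver inside Unity; the NumPy sketch below is a CPU stand-in under our own assumptions (all names are ours), showing Verlet integration, iterated distance constraints per segment, and a density splat as the volume-based quantity from which pressure forces would be derived:

```python
import numpy as np

def step_strands(pos, prev, rest_len, dt, gravity=(0.0, -9.81, 0.0),
                 iterations=4, grid_res=16):
    """One solver step for N strands of M particles each.

    pos, prev: (N, M, 3) current and previous particle positions.
    rest_len:  scalar segment rest length.
    Returns updated (pos, prev) and a coarse density grid.
    """
    g = np.asarray(gravity)

    # 1. Verlet integration: velocity is inferred from the previous frame.
    new_pos = pos + (pos - prev) + g * dt * dt

    # 2. Distance constraints keep each segment at its rest length,
    #    iterated Gauss-Seidel style; roots (particle 0) stay pinned.
    for _ in range(iterations):
        seg = new_pos[:, 1:] - new_pos[:, :-1]              # (N, M-1, 3)
        length = np.linalg.norm(seg, axis=-1, keepdims=True)
        corr = seg * (1.0 - rest_len / np.maximum(length, 1e-8))
        new_pos[:, 1:] -= 0.5 * corr
        new_pos[:, :-1] += 0.5 * corr
        new_pos[:, 0] = pos[:, 0]                           # re-pin roots

    # 3. Volume pass: splat particles into a density grid; a full solver
    #    would also derive pressure from density to push strands apart.
    lo = new_pos.min(axis=(0, 1))
    hi = new_pos.max(axis=(0, 1))
    cell = (new_pos - lo) / np.maximum(hi - lo, 1e-8) * (grid_res - 1)
    idx = cell.astype(int).reshape(-1, 3)
    density = np.zeros((grid_res,) * 3)
    np.add.at(density, tuple(idx.T), 1.0)

    return new_pos, pos, density
```

Level-of-detail support would slot naturally into this structure, for example by simulating fewer guide strands at distance and interpolating the rest, though the abstract does not specify the mechanism the production system uses.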