{"title":"A Facial Motion Retargeting Pipeline for Appearance Agnostic 3D Characters","authors":"ChangAn Zhu, Chris Joslin","doi":"10.1002/cav.70001","DOIUrl":null,"url":null,"abstract":"<p>3D facial motion retargeting has the advantage of capturing and recreating the nuances of human facial motions and speeding up the time-consuming 3D facial animation process. However, the facial motion retargeting pipeline is limited in reflecting the facial motion's semantic information (i.e., meaning and intensity), especially when applied to nonhuman characters. The retargeting quality heavily relies on the target face rig, which requires time-consuming preparation such as 3D scanning of human faces and modeling of blendshapes. In this paper, we propose a facial motion retargeting pipeline aiming to provide fast and semantically accurate retargeting results for diverse characters. The new framework comprises a target face parameterization module based on face anatomy and a compatible source motion interpretation module. 
From the quantitative and qualitative evaluations, we found that the proposed retargeting pipeline can naturally recreate the expressions performed by a motion capture subject in equivalent meanings and intensities, such semantic accuracy extends to the faces of nonhuman characters without labor-demanding preparations.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.70001","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Animation and Virtual Worlds","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cav.70001","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
3D facial motion retargeting has the advantage of capturing and recreating the nuances of human facial motions and speeding up the time-consuming 3D facial animation process. However, the facial motion retargeting pipeline is limited in reflecting the facial motion's semantic information (i.e., meaning and intensity), especially when applied to nonhuman characters. The retargeting quality heavily relies on the target face rig, which requires time-consuming preparation such as 3D scanning of human faces and modeling of blendshapes. In this paper, we propose a facial motion retargeting pipeline that aims to provide fast and semantically accurate retargeting results for diverse characters. The new framework comprises a target face parameterization module based on facial anatomy and a compatible source motion interpretation module. From the quantitative and qualitative evaluations, we found that the proposed retargeting pipeline can naturally recreate the expressions performed by a motion capture subject with equivalent meanings and intensities, and that this semantic accuracy extends to the faces of nonhuman characters without labor-intensive preparation.
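The abstract refers to blendshape modeling as the conventional, preparation-heavy representation that retargeting pipelines depend on. For readers unfamiliar with the term, the standard linear blendshape model expresses an animated face as the rest pose plus a weighted sum of per-target offsets. The following is a minimal generic sketch of that model, not the authors' proposed pipeline; the function name and toy data are illustrative only.

```python
import numpy as np

def apply_blendshapes(neutral, blendshapes, weights):
    """Linear blendshape model: V = V0 + sum_i w_i * (B_i - V0).

    neutral:     (N, 3) array of rest-pose vertex positions
    blendshapes: (K, N, 3) array of expression targets (e.g., smile, blink)
    weights:     (K,) array of activation weights, typically in [0, 1]
    """
    deltas = blendshapes - neutral  # per-target offsets from the rest pose
    # Contract the weight vector against the first axis of the deltas
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: a one-vertex face with a single "smile" target fully activated
neutral = np.zeros((1, 3))
smile_target = np.array([[[1.0, 0.0, 0.0]]])  # (K=1, N=1, 3)
posed = apply_blendshapes(neutral, smile_target, np.array([1.0]))
print(posed)  # [[1. 0. 0.]]
```

A motion-capture-driven retargeting system ultimately produces a stream of such weight vectors per frame; the cost the paper targets is the manual sculpting of the `blendshapes` targets themselves for each new character.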
Journal Introduction
With the advent of very powerful PCs and high-end graphics cards, there has been incredible development in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds and even with real worlds through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows them to be used in the movie industry. But this is only the beginning: with the development of Artificial Intelligence and Agent technology, these characters will become more and more autonomous and even intelligent. They will inhabit the Virtual Worlds in a Virtual Life together with animals and plants.