{"title":"Synthesizing Performance-driven Facial Animation","authors":"Chang-Wei LUO , Jun YU , Zeng-Fu WANG","doi":"10.1016/S1874-1029(14)60361-X","DOIUrl":null,"url":null,"abstract":"<div><p>In this paper, we present a system for real-time performance-driven facial animation. With the system, the user can control the facial expression of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model. The muscle actuation parameters are used to animate the face model. To increase the reality of facial animation, the orbicularis oris in our face model is divided into the inner part and outer part. We also establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate muscle actuation parameters to drive the face model. Experimental results show that our system runs in real time and outputs realistic facial animations. Compared with most existing performance-based facial animation systems, ours does not require facial markers, intrusive lighting, or special scanning equipment, thus it is inexpensive and easy to use.</p></div>","PeriodicalId":35798,"journal":{"name":"自动化学报","volume":"40 10","pages":"Pages 2245-2252"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S1874-1029(14)60361-X","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"自动化学报","FirstCategoryId":"1093","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S187410291460361X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 3
Abstract
In this paper, we present a system for real-time performance-driven facial animation. With the system, the user can control the facial expression of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model, which is animated by muscle actuation parameters. To increase the realism of the facial animation, the orbicularis oris in our face model is divided into an inner part and an outer part. We also establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate the muscle actuation parameters that drive the face model. Experimental results show that our system runs in real time and outputs realistic facial animations. Compared with most existing performance-based facial animation systems, ours requires no facial markers, intrusive lighting, or special scanning equipment; it is therefore inexpensive and easy to use.
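The abstract does not give the muscle model's equations. As an illustration only, the sketch below implements a Waters-style linear muscle, a common basis for muscle-based face models like the one described; the function name and all parameters (v1, v2, k, r_s, r_f, omega) are assumptions for this sketch, not the paper's notation.

```python
import numpy as np

def linear_muscle_displace(p, v1, v2, k, r_s, r_f, omega):
    """Displace one skin vertex p under a Waters-style linear muscle.

    v1: static attachment point (on bone); v2: insertion point (in skin).
    k: contraction parameter in [0, 1] (the muscle actuation value).
    r_s, r_f: start/finish radii of the influence zone around v1.
    omega: angular half-width of the influence sector, in radians.
    """
    to_p = p - v1
    d = np.linalg.norm(to_p)
    if d < 1e-9 or d > r_f:
        return p  # vertex lies outside the muscle's zone of influence

    # Angular falloff: vertices farther off the muscle axis move less.
    axis = (v2 - v1) / np.linalg.norm(v2 - v1)
    mu = np.arccos(np.clip(np.dot(to_p / d, axis), -1.0, 1.0))
    if mu > omega:
        return p
    a = np.cos(mu / omega * np.pi / 2.0)

    # Radial falloff: full effect near r_s, fading to zero at r_f.
    if d <= r_s:
        r = np.cos((1.0 - d / r_s) * np.pi / 2.0)
    else:
        r = np.cos((d - r_s) / (r_f - r_s) * np.pi / 2.0)

    # Contraction pulls the vertex toward the bony attachment v1.
    return p + k * a * r * (v1 - p) / d
```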
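For the relationship between jaw rotation and facial surface deformation, one plausible reading is a weighted rigid rotation of lower-face vertices about a jaw pivot axis, with per-vertex weights blending from 1 at the chin to 0 around the upper lip so the skin stretches smoothly. Everything below (names, the weights blend) is a hypothetical sketch, not the paper's method.

```python
import numpy as np

def rotate_about_axis(p, pivot, axis, theta):
    """Rodrigues rotation of point p about the line (pivot, axis) by theta."""
    axis = axis / np.linalg.norm(axis)
    v = p - pivot
    return (pivot
            + v * np.cos(theta)
            + np.cross(axis, v) * np.sin(theta)
            + axis * np.dot(axis, v) * (1.0 - np.cos(theta)))

def apply_jaw(vertices, weights, pivot, axis, theta):
    """Blend each vertex between its rest position and its fully rotated
    position. Weights near 1 (chin) follow the jaw rigidly; weights near 0
    (upper face) stay put, so the surface in between deforms smoothly."""
    out = vertices.copy()
    for i, w in enumerate(weights):
        if w > 0.0:
            rotated = rotate_about_axis(vertices[i], pivot, axis, theta)
            out[i] = (1.0 - w) * vertices[i] + w * rotated
    return out
```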
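The final step, recovering muscle actuation parameters from tracked feature points, is often posed as a bounded linear least-squares problem over a precomputed muscle basis: column j of the basis holds each feature point's displacement under unit activation of muscle j. The sketch below assumes such a linear model; the names (basis, estimate_activations) are illustrative, and the paper may use a different estimator.

```python
import numpy as np
from scipy.optimize import lsq_linear

def estimate_activations(basis, rest_feats, tracked_feats):
    """Estimate muscle actuation parameters from tracked feature points.

    basis:  (2F, M) matrix; column j stacks the 2D displacement of every
            feature point under unit activation of muscle j (precomputed
            by activating each model muscle in isolation).
    rest_feats, tracked_feats: (F, 2) feature positions at rest / this frame.
    Returns an (M,) activation vector constrained to [0, 1].
    """
    d = (tracked_feats - rest_feats).reshape(-1)   # stacked displacements
    res = lsq_linear(basis, d, bounds=(0.0, 1.0))  # bounded least squares
    return res.x
```

Bounding the activations to [0, 1] keeps the solution physically plausible (muscles can only contract between rest and full actuation), which is one reason a constrained solver is preferred over an unconstrained fit followed by clipping.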
Acta Automatica Sinica (自动化学报), Computer Science: Computer Graphics and Computer-Aided Design
CiteScore: 4.80
Self-citation rate: 0.00%
Articles published: 6655
About the journal:
ACTA AUTOMATICA SINICA is a joint publication of the Chinese Association of Automation and the Institute of Automation, Chinese Academy of Sciences. Its objective is the high-quality and rapid publication of articles, with a strong focus on new trends, original theoretical and experimental research and development, emerging technologies, and industrial standards in automation.