Facial retargeting using neural networks
T. Costigan, Mukta Prasad, R. McDonnell
Proceedings of the 7th International Conference on Motion in Games
Published: 2014-11-06
DOI: 10.1145/2668064.2668099
Citations: 11
Abstract
Mapping the motion of an actor's face to a virtual model is a difficult but important problem, especially as fully animated characters become more common in games and movies. Many methods have been proposed, but most require the source and target to be structurally similar. Optical motion-capture markers and blendshape weights are one example of topologically incongruous source and target representations with no simple mapping between them. In this paper, we created a system capable of determining this mapping through supervised learning from a small training dataset. Radial Basis Function Networks (RBFNs) have been used before to retarget markers to blendshape weights, but to our knowledge Multi-Layer Perceptron Artificial Neural Networks (referred to as ANNs) have not been employed in this way. We hypothesized that ANNs would yield a superior retargeting solution compared to RBFNs, due to their theoretically greater representational power. We implemented a retargeting system using both ANNs and RBFNs for comparison. We found that the two systems produced similar results (Figure 1), and in some cases the ANN proved more expressive, although it was also more difficult to work with.
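The core idea, learning a marker-to-blendshape mapping from a small set of training frames, can be sketched as a small regression network. The following is a minimal illustrative sketch only, not the authors' implementation: the marker counts, blendshape counts, synthetic data, and single-hidden-layer architecture are all assumptions chosen to show the technique.

```python
import numpy as np

# Hypothetical setup: each frame gives 3D positions of n_markers
# markers (flattened to 3 * n_markers inputs) and n_blendshapes
# target weights. The synthetic ground-truth mapping below stands
# in for captured training data.
rng = np.random.default_rng(0)
n_markers, n_blendshapes, n_frames = 10, 5, 200

X = rng.normal(size=(n_frames, 3 * n_markers))
W_true = rng.normal(size=(3 * n_markers, n_blendshapes))
Y = 1.0 / (1.0 + np.exp(-0.3 * X @ W_true))  # weights kept in [0, 1]

# One hidden tanh layer trained with batch gradient descent on MSE.
hidden, lr = 32, 0.05
W1 = rng.normal(scale=0.1, size=(3 * n_markers, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_blendshapes))
b2 = np.zeros(n_blendshapes)

losses = []
for step in range(500):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2                # predicted blendshape weights
    err = pred - Y
    losses.append(float(np.mean(err ** 2)))

    # Backpropagate through both layers.
    dW2 = H.T @ err / n_frames
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    dW1 = X.T @ dH / n_frames
    db1 = dH.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"training MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Once trained, a new frame's marker positions are pushed through the same forward pass to produce blendshape weights for the target rig; an RBFN variant would replace the tanh hidden layer with radial basis activations centered on training poses.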