Alternative strategies for runtime facial motion capture

Izmeth Siddeek
{"title":"Alternative strategies for runtime facial motion capture","authors":"Izmeth Siddeek","doi":"10.1145/2614106.2614139","DOIUrl":null,"url":null,"abstract":"classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. Facial motion capture has been hitherto, an effective albeit costly means of delivering performances for game characters. Using Kinect hardware we consider a pipeline for delivering game ready performances in Unity, enlisting the talents of actors and game developers. We begin with a review of the following data acquisition pipeline as a basis for motion capture: This data acquisition process informs a pipeline based on the concept of skeletal retargeting whereby the motion capture data stream may be mapped back to a common joint based skeletal system thus rendering it scalable and friendly for implementation into the game development pipeline. In this presentation we hope to take a look at the facial motion capture data set and its applicability to varied characters. Aside from the technical challenges of creating assets for facial motion capture, there is however, the problem of credibly reproducing performances of subjects. As the closer one moves towards realism, the harder it becomes to create an empathic human face. With this in mind we address the phenomenon commonly described as the \" uncanny valley \" in relation to motion captured facial performances and attempt to define the limits of the technology. 3 Conclusion Accessible runtime facial motion capture is an area of growing interest. The advent of the Kinect as a PC peripheral and Mi-crosoft \" s Kinect Fusion Project give us a glimpse into the possibilities afforded by this nascent technology. The implications of such technology are wide-ranging and ultimately offer the prospect of revolutionizing interactive entertainment. Sequence of expressions with corresponding facial reference.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGGRAPH 2014 Talks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2614106.2614139","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Facial motion capture has hitherto been an effective, albeit costly, means of delivering performances for game characters. Using Kinect hardware, we consider a pipeline for delivering game-ready performances in Unity, enlisting the talents of actors and game developers. We begin with a review of the data acquisition process as a basis for motion capture. This process informs a pipeline built on the concept of skeletal retargeting, whereby the motion-capture data stream is mapped back to a common joint-based skeletal system, rendering it scalable and straightforward to integrate into the game development pipeline. In this presentation we look at the facial motion capture data set and its applicability to varied characters. Aside from the technical challenges of creating assets for facial motion capture, there is the problem of credibly reproducing the performances of subjects: the closer one moves towards realism, the harder it becomes to create an empathic human face. With this in mind, we address the phenomenon commonly described as the "uncanny valley" in relation to motion-captured facial performances and attempt to define the limits of the technology.

Conclusion

Accessible runtime facial motion capture is an area of growing interest. The advent of the Kinect as a PC peripheral and Microsoft's Kinect Fusion project give us a glimpse into the possibilities afforded by this nascent technology. The implications of such technology are wide-ranging and ultimately offer the prospect of revolutionizing interactive entertainment.

Figure: Sequence of expressions with corresponding facial reference.
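The talk describes skeletal retargeting only at a high level and publishes no code, so the following is a minimal illustrative sketch: it assumes a Kinect-style face tracker emitting per-frame feature weights in [0, 1] and maps them onto joint rotations of a hypothetical common facial skeleton. All names (CaptureFrame, RetargetMap, jaw_open, and so on) are invented for illustration and are not taken from the talk.

```python
# Hypothetical sketch of facial skeletal retargeting; names and data
# layout are assumptions, not the author's published pipeline.
from dataclasses import dataclass


@dataclass
class CaptureFrame:
    """One frame of facial capture: tracked feature weights in [0, 1]
    (e.g. jaw_open, brow_raise), as a Kinect-style face tracker might emit."""
    timestamp: float
    weights: dict[str, float]


@dataclass
class RetargetMap:
    """Maps one captured feature to a joint on the common facial skeleton,
    with a per-character gain so one data stream can drive varied rigs."""
    feature: str
    joint: str           # e.g. "jaw", "brow_L"
    axis: str            # rotation axis on the target joint
    max_degrees: float   # joint rotation when the feature weight is 1.0


def retarget_frame(frame: CaptureFrame,
                   mapping: list[RetargetMap]) -> dict[str, tuple[str, float]]:
    """Convert one capture frame into joint rotations for the common skeleton.

    Returns {joint_name: (axis, degrees)}, ready to be applied to a
    game-engine rig (in Unity this would drive Transform.localRotation).
    """
    pose = {}
    for m in mapping:
        # Clamp the tracked weight, then scale it into a joint rotation.
        weight = max(0.0, min(1.0, frame.weights.get(m.feature, 0.0)))
        pose[m.joint] = (m.axis, weight * m.max_degrees)
    return pose


# Example: one frame of capture driving two joints on a shared face rig.
mapping = [
    RetargetMap("jaw_open",   "jaw",    "x", 25.0),
    RetargetMap("brow_raise", "brow_L", "z", 12.0),
]
frame = CaptureFrame(timestamp=0.033, weights={"jaw_open": 0.6, "brow_raise": 0.2})
print(retarget_frame(frame, mapping))
# -> approximately {'jaw': ('x', 15.0), 'brow_L': ('z', 2.4)}
```

In an actual Unity integration, the per-joint rotations would be applied to the character rig's transforms each frame; keeping the mapping table per character is what lets the same capture stream be reused across varied characters, which is the scalability the abstract refers to.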