Physics-based Motion Retargeting from Sparse Inputs

IF 2.3 · Q3 · Computer Science, Software Engineering · CiteScore 2.90
Daniele Reda, Jungdam Won, Yuting Ye, Michiel van de Panne, Alexander W. Winkler
Proceedings of the ACM on Computer Graphics and Interactive Techniques, pages 1-19, published 2023-07-04
DOI: 10.1145/3606928 · Citations: 1

Abstract

Avatars are important for creating interactive and immersive experiences in virtual worlds. One challenge in animating these characters to mimic a user's motion is that commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data about the user's pose. Another challenge is that an avatar might have a different skeleton structure than a human, and the mapping between the two is unclear. In this work we address both of these challenges. We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies. Our method uses reinforcement learning to train a policy to control characters in a physics simulator. We require only human motion capture data for training, without relying on artist-generated animations for each avatar. This allows us to use large motion capture datasets to train general policies that can track unseen users from real, sparse data in real-time. We demonstrate the feasibility of our approach on three characters with different skeleton structures: a dinosaur, a mouse-like creature, and a human. We show that the avatar poses often match the user surprisingly well, despite no sensor information of the lower body being available. We discuss and ablate the important components of our framework, specifically the kinematic retargeting step, the imitation, contact, and action rewards, as well as our asymmetric actor-critic observations. We further explore the robustness of our method in a variety of settings, including unbalancing, dancing, and sports motions.
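The abstract names the method's key ingredients: a policy driven only by sparse headset-and-controller observations, a physics simulator, reward terms for imitation, contact, and action, and asymmetric actor-critic observations. As an illustration only, the following minimal Python/NumPy sketch shows how those pieces could fit together; all names, dimensions, weights, and functional forms below are hypothetical assumptions, not the paper's actual implementation.

import numpy as np

# Hypothetical sizes -- the paper does not publish these exact numbers.
SENSOR_DIM = 9    # per device: e.g., 3D position plus 6D orientation
ACTION_DIM = 28   # joint targets for the simulated avatar (assumed)

def actor_observation(headset, left_ctrl, right_ctrl):
    """The actor sees only the sparse sensor data available on a real
    AR/VR device: one headset and two controllers, no lower body."""
    return np.concatenate([headset, left_ctrl, right_ctrl])

def critic_observation(actor_obs, privileged_state):
    """Asymmetric actor-critic: the critic additionally sees privileged
    simulator state (full-body pose, velocities, contacts) that exists
    during training but is unavailable at test time."""
    return np.concatenate([actor_obs, privileged_state])

def reward(sim, ref, action, w_imit=0.6, w_contact=0.2, w_action=0.2):
    """Combined reward with imitation, contact, and action terms
    (weights and forms are illustrative guesses)."""
    # Imitation: track the kinematically retargeted reference pose.
    r_imit = np.exp(-np.sum((sim["joint_pos"] - ref["joint_pos"]) ** 2))
    # Contact: agree with the reference's foot-contact states.
    r_contact = float(np.all(sim["contacts"] == ref["contacts"]))
    # Action: penalize large actions to encourage smooth control.
    r_action = np.exp(-np.sum(action ** 2))
    return w_imit * r_imit + w_contact * r_contact + w_action * r_action

The asymmetric split is the notable design choice here: the lower-body state the critic uses to estimate values during training is exactly what the actor must learn to infer from head and hand motion alone at test time.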