Posture reconstruction using Kinect with a probabilistic model

Liuyang Zhou, Zhiguang Liu, Howard Leung, Hubert P. H. Shum
{"title":"Posture reconstruction using Kinect with a probabilistic model","authors":"Liuyang Zhou, Zhiguang Liu, Howard Leung, Hubert P. H. Shum","doi":"10.1145/2671015.2671021","DOIUrl":null,"url":null,"abstract":"Recent work has shown that depth image based 3D posture estimation hardware such as Kinect has made interactive applications more popular. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding action performed by the user. While previous research has shown that data-driven methods can be used to reconstruct the correct postures, they usually require a large posture database, which greatly limit the usability for systems with constrained hardware such as game console. To solve this problem, we present a new probabilistic framework to enhance the accuracy of the postures live captured by Kinect. We adopt the Gaussian Process model as a prior to leverage position data obtained from Kinect and marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the observed input data from Kinect when its tracking result is good, we embed joint reliability into the optimization framework. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time posture based applications such as motion-based gaming and sport training.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"117-125"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2671015.2671021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 28

Abstract

Recent work has shown that depth-image-based 3D posture estimation hardware such as the Kinect has made interactive applications more popular. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. While previous research has shown that data-driven methods can be used to reconstruct correct postures, such methods usually require a large posture database, which greatly limits their usability on systems with constrained hardware such as game consoles. To solve this problem, we present a new probabilistic framework to enhance the accuracy of postures captured live by the Kinect. We adopt a Gaussian Process model as a prior to leverage position data obtained from the Kinect and from a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the observed input data from the Kinect when its tracking results are good, we embed joint reliability into the optimization framework. Experimental results demonstrate that our system can generate high-quality postures even under severe self-occlusion, which is beneficial for real-time posture-based applications such as motion-based gaming and sports training.
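The abstract describes an optimization that balances three ingredients: a data term weighted by per-joint tracking reliability, a Gaussian Process prior learned from marker-based motion capture data, and a temporal consistency term on joint velocities. The sketch below shows how such a per-frame objective could be assembled; the function names, weights, and the GP interface (gp_prior.neg_log_likelihood) are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal sketch of a per-frame reconstruction energy of the kind the
# abstract describes. All symbols and weights here are assumptions for
# illustration only.
import numpy as np

def reconstruction_energy(x, kinect_obs, reliability, gp_prior,
                          x_prev, x_prev2,
                          w_data=1.0, w_prior=1.0, w_temporal=0.1):
    """Energy to minimise for one frame.

    x           : candidate posture, flattened 3D joint positions, shape (3*J,)
    kinect_obs  : joint positions reported by the Kinect, shape (3*J,)
    reliability : per-joint tracking confidence in [0, 1], shape (J,)
    gp_prior    : object exposing neg_log_likelihood(x), the posture's negative
                  log-likelihood under a GP model trained on marker-based
                  motion capture data (assumed interface)
    x_prev, x_prev2 : reconstructed postures of the two previous frames
    """
    # Expand per-joint reliability weights to the x/y/z components.
    w = np.repeat(reliability, 3)

    # Data term: stay close to the Kinect observation where tracking is reliable.
    e_data = np.sum(w * (x - kinect_obs) ** 2)

    # Prior term: prefer postures that are likely under the mocap-trained GP.
    e_prior = gp_prior.neg_log_likelihood(x)

    # Temporal term: penalise changes in joint velocity between successive frames.
    v_curr = x - x_prev
    v_prev = x_prev - x_prev2
    e_temporal = np.sum((v_curr - v_prev) ** 2)

    return w_data * e_data + w_prior * e_prior + w_temporal * e_temporal
```

In this reading, the reliability weights let the Kinect observation dominate when tracking is good, while the GP prior and temporal term take over under self-occlusion, which matches the behaviour the abstract claims.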