M³: marker-free model reconstruction and motion tracking from 3D voxel data

Edilson de Aguiar, C. Theobalt, M. Magnor, H. Theisel, H. Seidel
{"title":"M/sup 3/: marker-free model reconstruction and motion tracking from 3D voxel data","authors":"Edilson de Aguiar, C. Theobalt, M. Magnor, H. Theisel, H. Seidel","doi":"10.1109/PCCGA.2004.1348340","DOIUrl":null,"url":null,"abstract":"In computer animation, human motion capture from video is a widely used technique to acquire motion parameters. The acquisition process typically requires an intrusion into the scene in the form of optical markers which are used to estimate the parameters of motion as well as the kinematic structure of the performer. Marker-free optical motion capture approaches exist, but due to their dependence on a specific type of a priori model they can hardly be used to track other subjects, e.g. animals. To bridge the gap between the generality of marker-based methods and the applicability of marker-free methods, we present a flexible non-intrusive approach that estimates both, a kinematic model and its parameters of motion from a sequence of voxel-volumes. The volume sequences are reconstructed from multi-view video data by means of a shape-from-silhouette technique. The described method is well-suited for but not limited to motion capture of human subjects.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PCCGA.2004.1348340","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 13

Abstract

In computer animation, human motion capture from video is a widely used technique to acquire motion parameters. The acquisition process typically requires an intrusion into the scene in the form of optical markers, which are used to estimate the parameters of motion as well as the kinematic structure of the performer. Marker-free optical motion capture approaches exist, but due to their dependence on a specific type of a priori model they can hardly be used to track other subjects, e.g., animals. To bridge the gap between the generality of marker-based methods and the applicability of marker-free methods, we present a flexible, non-intrusive approach that estimates both a kinematic model and its motion parameters from a sequence of voxel volumes. The volume sequences are reconstructed from multi-view video data by means of a shape-from-silhouette technique. The described method is well-suited for, but not limited to, motion capture of human subjects.
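The abstract's reconstruction step is a shape-from-silhouette (visual hull) computation: a voxel survives only if it projects inside the foreground silhouette in every calibrated camera view. The sketch below illustrates this idea under simplifying assumptions; the camera projection matrices, silhouette masks, and grid resolution are hypothetical inputs, and the paper's actual pipeline may differ in calibration handling and post-processing.

```python
# Minimal sketch of shape-from-silhouette voxel carving (visual hull),
# assuming per-camera binary silhouette masks and 3x4 projection matrices.
import numpy as np

def carve_visual_hull(silhouettes, projections, bounds, resolution=64):
    """Carve a binary voxel volume from multi-view silhouettes.

    silhouettes : list of (H, W) boolean foreground masks, one per camera.
    projections : list of (3, 4) camera projection matrices.
    bounds      : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) scene box.
    """
    # Regular grid of voxel centers inside the bounding box, in homogeneous coords.
    axes = [np.linspace(lo, hi, resolution) for lo, hi in bounds]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    centers = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(centers.shape[0], dtype=bool)
    for mask, P in zip(silhouettes, projections):
        # Project all voxel centers into this camera and dehomogenize.
        pix = centers @ P.T                    # (N, 3) homogeneous image points
        uv = pix[:, :2] / pix[:, 2:3]          # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # A voxel remains occupied only if every camera sees it inside the silhouette.
        hit = np.zeros_like(occupied)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit

    return occupied.reshape(resolution, resolution, resolution)
```

Carving one such volume per frame yields the voxel-volume sequence from which the paper then estimates the kinematic model and its motion parameters.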