Joint-Triplet Motion Image and Local Binary Pattern for 3D Action Recognition Using Kinect

Faisal Ahmed, Padma Polash Paul, M. Gavrilova
{"title":"基于Kinect的三维动作识别的关节-三重运动图像和局部二值模式","authors":"Faisal Ahmed, Padma Polash Paul, M. Gavrilova","doi":"10.1145/2915926.2915937","DOIUrl":null,"url":null,"abstract":"This paper presents a new action recognition method that utilizes the 3D skeletal motion data captured using the Kinect depth sensor. We propose a robust view-invariant joint motion representation based on the spatio-temporal changes in relative angles among the different skeletal joint-triplets, namely the joint relative angle (JRA). A sequence of JRAs obtained for a particular joint-triplet intuitively represents the level of involvement of those joints in performing a specific action. Collection of all joint-triplet JRA sequences is then utilized to construct a spatial holistic description of action-specific motion patterns, namely the 2D joint-triplet motion image. The proposed method exploits a local texture analysis method, the local binary pattern (LBP), to highlight micro-level texture details in the motion images. This process isolates prototypical features for different actions. LBP histogram features are then projected into a discriminant Fisher-space, resulting in more compact and disjoint feature clusters representing individual actions. The performance of the proposed method is evaluated using two publicly available Kinect action databases. Extensive experiments show advantage of the proposed joint-triplet motion image and LBP-based action recognition approach over existing methods.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Joint-Triplet Motion Image and Local Binary Pattern for 3D Action Recognition Using Kinect\",\"authors\":\"Faisal Ahmed, Padma Polash Paul, M. Gavrilova\",\"doi\":\"10.1145/2915926.2915937\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a new action recognition method that utilizes the 3D skeletal motion data captured using the Kinect depth sensor. We propose a robust view-invariant joint motion representation based on the spatio-temporal changes in relative angles among the different skeletal joint-triplets, namely the joint relative angle (JRA). A sequence of JRAs obtained for a particular joint-triplet intuitively represents the level of involvement of those joints in performing a specific action. Collection of all joint-triplet JRA sequences is then utilized to construct a spatial holistic description of action-specific motion patterns, namely the 2D joint-triplet motion image. The proposed method exploits a local texture analysis method, the local binary pattern (LBP), to highlight micro-level texture details in the motion images. This process isolates prototypical features for different actions. LBP histogram features are then projected into a discriminant Fisher-space, resulting in more compact and disjoint feature clusters representing individual actions. The performance of the proposed method is evaluated using two publicly available Kinect action databases. 
Extensive experiments show advantage of the proposed joint-triplet motion image and LBP-based action recognition approach over existing methods.\",\"PeriodicalId\":409915,\"journal\":{\"name\":\"Proceedings of the 29th International Conference on Computer Animation and Social Agents\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 29th International Conference on Computer Animation and Social Agents\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2915926.2915937\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2915926.2915937","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13

Abstract

This paper presents a new action recognition method that utilizes the 3D skeletal motion data captured using the Kinect depth sensor. We propose a robust view-invariant joint motion representation based on the spatio-temporal changes in relative angles among the different skeletal joint-triplets, namely the joint relative angle (JRA). A sequence of JRAs obtained for a particular joint-triplet intuitively represents the level of involvement of those joints in performing a specific action. The collection of all joint-triplet JRA sequences is then utilized to construct a spatial holistic description of action-specific motion patterns, namely the 2D joint-triplet motion image. The proposed method exploits a local texture analysis method, the local binary pattern (LBP), to highlight micro-level texture details in the motion images. This process isolates prototypical features for different actions. LBP histogram features are then projected into a discriminant Fisher-space, resulting in more compact and disjoint feature clusters representing individual actions. The performance of the proposed method is evaluated using two publicly available Kinect action databases. Extensive experiments show the advantage of the proposed joint-triplet motion image and LBP-based action recognition approach over existing methods.
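
To make the pipeline described in the abstract concrete, the following is a minimal sketch, not the authors' reference implementation. It assumes skeleton data arrives as a (frames × joints × 3) array of 3D joint positions, takes the JRA of a triplet (a, b, c) to be the angle at the middle joint b between the vectors b→a and b→c, builds the 2D motion image by stacking one resampled JRA sequence per triplet, and uses a basic 8-neighbour LBP histogram; the exact JRA definition, motion-image layout, and LBP variant used in the paper may differ. The helper names (joint_relative_angle, motion_image, lbp_histogram) are illustrative only.

```python
# Illustrative sketch of the abstract's pipeline (assumptions noted in the text above,
# not the authors' reference implementation).
import itertools
import numpy as np


def joint_relative_angle(frames, a, b, c):
    """JRA sequence for one joint-triplet: angle at joint b, per frame (radians)."""
    v1 = frames[:, a, :] - frames[:, b, :]          # vector b -> a, shape (T, 3)
    v2 = frames[:, c, :] - frames[:, b, :]          # vector b -> c, shape (T, 3)
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))


def motion_image(frames, length=64):
    """Stack resampled JRA sequences of all joint-triplets into a 2D 'image'."""
    T, J, _ = frames.shape
    rows = []
    for a, b, c in itertools.combinations(range(J), 3):
        jra = joint_relative_angle(frames, a, b, c)
        # linear resampling to a fixed temporal length so all rows align
        rows.append(np.interp(np.linspace(0, T - 1, length), np.arange(T), jra))
    img = np.array(rows)
    # normalize angles (0..pi) to 8-bit grey levels for texture analysis
    return np.uint8(255 * img / np.pi)


def lbp_histogram(img):
    """Basic 8-neighbour LBP histogram (256 bins) over the motion image."""
    padded = np.pad(img.astype(np.int16), 1, mode='edge')
    center = padded[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:padded.shape[0] - 1 + dy,
                           1 + dx:padded.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()


# Example: one random 60-frame, 20-joint sequence -> a single feature vector.
demo = np.random.rand(60, 20, 3)
feature = lbp_histogram(motion_image(demo))
print(feature.shape)  # (256,)
```

In the method described above, such histograms would then be projected into a discriminant Fisher-space before classification; a straightforward stand-in for that step is a linear discriminant analysis (e.g. scikit-learn's LinearDiscriminantAnalysis) fitted on the labelled training histograms.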