Joint-Triplet Motion Image and Local Binary Pattern for 3D Action Recognition Using Kinect

Authors: Faisal Ahmed, Padma Polash Paul, M. Gavrilova
Published in: Proceedings of the 29th International Conference on Computer Animation and Social Agents
Publication date: 2016-05-23
DOI: 10.1145/2915926.2915937
Citations: 13
Abstract
This paper presents a new action recognition method that utilizes the 3D skeletal motion data captured using the Kinect depth sensor. We propose a robust, view-invariant joint motion representation based on the spatio-temporal changes in relative angles among different skeletal joint-triplets, namely the joint relative angle (JRA). A sequence of JRAs obtained for a particular joint-triplet intuitively represents the level of involvement of those joints in performing a specific action. The collection of all joint-triplet JRA sequences is then utilized to construct a holistic spatial description of action-specific motion patterns, namely the 2D joint-triplet motion image. The proposed method exploits a local texture analysis method, the local binary pattern (LBP), to highlight micro-level texture details in the motion images. This process isolates prototypical features for different actions. LBP histogram features are then projected into a discriminant Fisher-space, resulting in more compact and disjoint feature clusters representing individual actions. The performance of the proposed method is evaluated using two publicly available Kinect action databases. Extensive experiments show the advantage of the proposed joint-triplet motion image and LBP-based action recognition approach over existing methods.
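To make the core representation concrete, the following is a minimal sketch of how a joint relative angle for a skeletal joint-triplet, and the resulting 2D motion image (one row per triplet, one column per frame), could be computed. This is an illustrative reconstruction assuming NumPy; the function names `joint_relative_angle` and `motion_image` are not from the paper, and the paper's exact triplet selection, normalization, and LBP stage are not reproduced here.

```python
import numpy as np

def joint_relative_angle(a, b, c):
    """Angle (radians) at joint b formed by the triplet (a, b, c).

    Because it depends only on the relative geometry of the two body
    segments b->a and b->c, the angle is invariant to camera viewpoint
    and skeleton translation, which motivates its use in the paper.
    """
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def motion_image(skeleton_seq, triplets):
    """Stack per-frame JRAs into a 2D array: rows = triplets, columns = frames.

    skeleton_seq: sequence of frames, each a list/array of 3D joint positions.
    triplets: list of (i, j, k) joint-index triplets, angle measured at j.
    """
    return np.array([[joint_relative_angle(frame[i], frame[j], frame[k])
                      for frame in skeleton_seq]
                     for (i, j, k) in triplets])
```

In the full pipeline described above, this 2D motion image would then be analyzed with LBP to extract texture histograms before the Fisher-space projection.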