Feature covariance for human action recognition
Alexandre Perez, Hedi Tabia, D. Declercq, A. Zanotti
2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), December 2016
DOI: 10.1109/IPTA.2016.7820982
Citations: 4
Abstract
In this paper, we present a novel method for human action recognition based on covariance features. Computationally efficient action features are extracted from the skeleton of the subject performing the action; they capture the relative positions of the joints and their motion over time. These features are encoded into a compact representation using a covariance matrix. We evaluate the proposed method on several datasets, including MSR Action 3D, MSR Daily Activity 3D and UTKinect-Action, and show that it outperforms related state-of-the-art methods.
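To make the pipeline concrete, below is a minimal sketch of a covariance descriptor for skeleton sequences, in the spirit of the abstract but not the authors' exact feature design. All names (`frame_features`, `covariance_descriptor`), the choice of joint 0 as the reference joint, and the log-Euclidean flattening step are illustrative assumptions; the paper's own features and encoding may differ.

```python
# A minimal sketch of a covariance descriptor for skeleton sequences.
# Assumptions: `sequence` is a (T, J, 3) array of T frames of J joints,
# and joint 0 serves as the reference joint (e.g., the hip centre).
import numpy as np
from scipy.linalg import logm  # matrix logarithm, maps SPD matrices to a vector space

def frame_features(sequence, ref_joint=0):
    """Per-frame features: joint positions relative to a reference joint,
    concatenated with finite-difference velocities (motion over time)."""
    rel = sequence - sequence[:, ref_joint:ref_joint + 1, :]   # (T, J, 3) relative positions
    vel = np.diff(rel, axis=0, prepend=rel[:1])                # (T, J, 3) frame-to-frame motion
    feats = np.concatenate([rel, vel], axis=1)                 # (T, 2J, 3)
    return feats.reshape(sequence.shape[0], -1)                # (T, 6J) one vector per frame

def covariance_descriptor(sequence, eps=1e-6):
    """Encode a whole sequence as the covariance of its frame features,
    then flatten the log-mapped matrix into a fixed-length descriptor."""
    X = frame_features(sequence)                 # (T, d)
    C = np.cov(X, rowvar=False)                  # (d, d) covariance over time
    C += eps * np.eye(C.shape[0])                # regularise so C is strictly SPD
    L = logm(C)                                  # log-Euclidean map for SPD matrices
    iu = np.triu_indices(L.shape[0])             # covariance is symmetric: keep upper triangle
    return L[iu].real

# Toy usage: a random 40-frame, 20-joint "action".
rng = np.random.default_rng(0)
desc = covariance_descriptor(rng.normal(size=(40, 20, 3)))
print(desc.shape)  # fixed-length descriptor regardless of sequence length T
```

Note that the covariance matrix yields a descriptor whose size depends only on the feature dimension, not on the number of frames, which is what makes the representation compact across actions of varying duration.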