{"title":"Polynormal Fisher vector for activity recognition from depth sequences","authors":"Xiaodong Yang, Yingli Tian","doi":"10.1145/2668956.2668962","DOIUrl":null,"url":null,"abstract":"The advent of depth sensors has facilitated a variety of visual recognition tasks including human activity understanding. This paper presents a novel feature representation to recognize human activities from video sequences captured by a depth camera. We assemble local neighboring hypersurface normals from a depth sequence to form the polynormal which jointly encodes local motion and shape cues. Fisher vector is employed to aggregate the low-level polynormals into the Polynormal Fisher Vector. In order to capture the global spatial layout and temporal order, we employ a spatio-temporal pyramid to subdivide a depth sequence into a set of space-time cells. Polynormal Fisher Vectors from these cells are combined as the final representation of a depth video. Experimental results demonstrate that our method achieves the state-of-the-art results on the two public benchmark datasets, i.e., MSRAction3D and MSRGesture3D.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2668956.2668962","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
The advent of depth sensors has facilitated a variety of visual recognition tasks, including human activity understanding. This paper presents a novel feature representation for recognizing human activities from video sequences captured by a depth camera. We assemble local neighboring hypersurface normals from a depth sequence to form the polynormal, which jointly encodes local motion and shape cues. The Fisher vector is employed to aggregate the low-level polynormals into the Polynormal Fisher Vector. To capture the global spatial layout and temporal order, we employ a spatio-temporal pyramid to subdivide a depth sequence into a set of space-time cells. Polynormal Fisher Vectors from these cells are concatenated as the final representation of a depth video. Experimental results demonstrate that our method achieves state-of-the-art results on two public benchmark datasets, MSRAction3D and MSRGesture3D.
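The following is a minimal sketch of the pipeline as described in the abstract, written only from that description: estimate hypersurface normals per depth frame, stack normals from a local space-time neighborhood into a polynormal descriptor, encode descriptors with a Fisher vector over a diagonal-covariance GMM, and pool over pyramid cells. The function names, the 3x3x3 patch size, the 8-component GMM, and the temporal-only pyramid are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch only; not the authors' code. Patch size, GMM size,
# and the temporal-only pyramid are assumptions made for brevity.
import numpy as np
from sklearn.mixture import GaussianMixture


def depth_normals(depth):
    """Approximate hypersurface normals of one depth frame via finite differences."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)


def polynormals(depth_seq, patch=3):
    """Stack normals from a patch x patch x patch space-time neighborhood
    into one low-level polynormal descriptor (assumed grouping scheme)."""
    normals = np.stack([depth_normals(f) for f in depth_seq])  # (T, H, W, 3)
    T, H, W, _ = normals.shape
    descs, locs = [], []
    for t in range(0, T - patch + 1, patch):
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                block = normals[t:t + patch, y:y + patch, x:x + patch]
                descs.append(block.reshape(-1))
                locs.append((t, y, x))
    return np.array(descs), np.array(locs)


def fisher_vector(descs, gmm):
    """Fisher vector (mean and variance gradients) w.r.t. a diagonal GMM,
    with the usual power and L2 normalization."""
    q = gmm.predict_proba(descs)                    # (N, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    N = descs.shape[0]
    d = (descs[:, None, :] - mu) / np.sqrt(var)     # (N, K, D) whitened residuals
    g_mu = (q[..., None] * d).sum(0) / (N * np.sqrt(w)[:, None])
    g_var = (q[..., None] * (d ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))          # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)        # L2 normalization


def pyramid_fisher_vector(depth_seq, gmm, t_cells=2):
    """Concatenate Fisher vectors from temporal cells; a 1D stand-in for the
    full spatio-temporal pyramid described in the abstract."""
    descs, locs = polynormals(depth_seq)
    T = len(depth_seq)
    fvs = []
    for c in range(t_cells):
        mask = (locs[:, 0] >= c * T / t_cells) & (locs[:, 0] < (c + 1) * T / t_cells)
        cell = descs[mask] if mask.any() else descs
        fvs.append(fisher_vector(cell, gmm))
    return np.concatenate(fvs)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = rng.random((16, 32, 32))                  # toy depth sequence (T, H, W)
    descs, _ = polynormals(seq)
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(descs)
    rep = pyramid_fisher_vector(seq, gmm)
    print(rep.shape)                                # final video-level representation
```

In a full version of this pipeline, the GMM would be trained on polynormals pooled from the training videos, and the resulting pyramid-level Fisher vectors would be fed to a linear classifier; those steps are omitted here.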