{"title":"Tracking skeletal fusion feature for one shot learning gesture recognition","authors":"Li Xuejiao, S. Yongqing","doi":"10.1109/ICIVC.2017.7984545","DOIUrl":null,"url":null,"abstract":"Accessibility of RGB-D sensors have facilitated the research in gesture recognition. During sundry approaches, it is found that skeleton information is significant especially for one shot learning by virtue of the minimum requirement of data. We made a review on state-of-the-art approaches for gesture recognition in one shot learning. Based on bag of visual model (BOVW), this paper presents a study on skeletal tracking from RGB-D and puts forward a novel skeletal fusion feature extracted from these data, namely skeletal filtered features around key points (SFFK). The proposed SFFK feature is efficient, precise and robust. Efforts were made to optimize the gesture segmentation algorithm based on dynamic time warping (DTW). We propose different ways to gain the motion matrix, during which we find one performs best. That is taking OR operation on two difference images obtained from three adjacent frames. Finally, we evaluated our approach on the ChaLearn gesture dataset (CGD). The results show that our approach is remarkably superior to those existed approaches on CGD.","PeriodicalId":181522,"journal":{"name":"2017 2nd International Conference on Image, Vision and Computing (ICIVC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 2nd International Conference on Image, Vision and Computing (ICIVC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIVC.2017.7984545","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
The accessibility of RGB-D sensors has facilitated research in gesture recognition. Among various approaches, skeleton information has proven significant, especially for one-shot learning, owing to its minimal data requirements. We review state-of-the-art approaches for gesture recognition in the one-shot-learning setting. Building on the bag-of-visual-words (BOVW) model, this paper studies skeletal tracking from RGB-D data and puts forward a novel skeletal fusion feature extracted from these data, namely skeletal filtered features around key points (SFFK). The proposed SFFK feature is efficient, precise, and robust. We also optimize the gesture segmentation algorithm based on dynamic time warping (DTW). We compare several ways of obtaining the motion matrix and find that one performs best: taking the OR of the two difference images computed from three adjacent frames. Finally, we evaluate our approach on the ChaLearn gesture dataset (CGD). The results show that our approach is markedly superior to existing approaches on CGD.
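The best-performing motion-matrix computation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the binarization threshold and the use of grayscale frames are assumptions, since the abstract only specifies ORing the two difference images of three adjacent frames.

```python
import numpy as np

def motion_matrix(f1, f2, f3, thresh=15):
    """Binary motion matrix from three adjacent grayscale frames.

    Computes the two absolute difference images |f2-f1| and |f3-f2|,
    thresholds each (the threshold value is an assumption, not taken
    from the paper), and combines them with a logical OR.
    """
    # Cast to a signed type so subtraction of uint8 frames cannot wrap.
    d1 = np.abs(f2.astype(np.int16) - f1.astype(np.int16)) > thresh
    d2 = np.abs(f3.astype(np.int16) - f2.astype(np.int16)) > thresh
    return np.logical_or(d1, d2).astype(np.uint8)
```

A pixel is marked as moving if it changed between either pair of consecutive frames, which makes the mask less sensitive to single-frame noise than a two-frame difference alone.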