Title: A Robust Method for Hands Gesture Recognition from Egocentric Depth Sensor
Authors: Ye Bai, Yue Qi
Venue: 2018 International Conference on Virtual Reality and Visualization (ICVRV)
Published: 2018-10-01
DOI: 10.1109/icvrv.2018.00015 (https://doi.org/10.1109/icvrv.2018.00015)
Citation count: 1
Abstract
We present a method for robust and accurate hand pose recognition from egocentric depth cameras. Our method combines CNN-based hand pose estimation with hand gesture recognition based on joint locations. In the pose estimation stage, we use a hand geometry prior network to estimate the hand pose. In the gesture recognition stage, we define a hand language based on a set of predefined basic propositions, obtained by applying four predicate types to the finger and palm states. The hand language is used to convert the estimated joint locations into hand gestures. Our experimental results indicate that the method enables robust and accurate gesture recognition under self-occlusion.
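The idea of a "hand language" built from predicates over finger and palm states can be illustrated with a minimal sketch. The predicate below (finger extension from tip/base/palm distances), the joint names, and the gesture lookup table are all illustrative assumptions, not the paper's actual predicate set or thresholds:

```python
# Hedged sketch: converting estimated joint locations to a gesture label via
# boolean propositions, in the spirit of the paper's predicate-based hand
# language. Joint names, the heuristic, and the rule table are assumptions.
import math

def finger_extended(joints, finger, thresh=0.8):
    """Predicate: a finger counts as extended when its tip lies far from the
    palm relative to the base-to-palm distance (illustrative heuristic)."""
    palm = joints["palm"]
    tip = joints[f"{finger}_tip"]
    base = joints[f"{finger}_base"]
    return math.dist(tip, palm) > (1.0 + thresh) * math.dist(base, palm)

def recognize(joints):
    """Combine per-finger propositions into a gesture label (toy rule table)."""
    fingers = ["thumb", "index", "middle", "ring", "pinky"]
    state = tuple(finger_extended(joints, f) for f in fingers)
    table = {
        (True, True, True, True, True): "open_palm",
        (False, False, False, False, False): "fist",
        (False, True, True, False, False): "victory",
    }
    return table.get(state, "unknown")
```

Because the propositions are symbolic, new gestures can be added by extending the rule table rather than retraining the pose estimator.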