An efficient method for human pointing estimation for robot interaction
S. Ueno, S. Naito, Tsuhan Chen
2014 IEEE International Conference on Image Processing (ICIP), pp. 1545-1549, October 2014
DOI: 10.1109/ICIP.2014.7025309
Citations: 9
Abstract
In this paper, we propose an efficient calibration method for estimating the pointing direction of a human pointing gesture to facilitate robot interaction. The ways in which people use pointing gestures to indicate an object vary from person to person. Moreover, people do not always point at an object carefully, so the line from the eye through the tip of the index finger diverges from the line of sight. We therefore focus on adapting to these individual pointing styles to improve the accuracy of target object identification through an effective calibration process. We model each individual's style as two offsets, one horizontal and one vertical. After locating the head and fingertip positions, we learn these offsets for each person through a training procedure in which the person points at the camera. Experimental results show that the proposed method outperforms conventional head-hand, head-fingertip, and eye-fingertip pointing recognition methods.
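The two-offset calibration described in the abstract can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: it assumes 3D eye and fingertip positions are already available, parameterizes the eye-fingertip ray by azimuth and elevation angles, and exploits the fact that while the user points at the camera the true direction is known, so the per-user horizontal and vertical offsets are simply the observed angles minus the true ones. All function names are hypothetical.

```python
import numpy as np

def ray_angles(eye, fingertip):
    """Azimuth and elevation (radians) of the eye-to-fingertip ray.

    Assumed camera-centric frame: +x right, +y up, +z forward.
    """
    v = np.asarray(fingertip, float) - np.asarray(eye, float)
    azimuth = np.arctan2(v[0], v[2])                     # horizontal angle
    elevation = np.arctan2(v[1], np.hypot(v[0], v[2]))   # vertical angle
    return azimuth, elevation

def calibrate_offsets(eye, fingertip, camera):
    """Learn the per-user offsets during the training phase.

    While the person points at the camera, the true pointing direction
    (eye -> camera) is known, so the horizontal/vertical offsets are the
    observed angles minus the true angles.
    """
    obs_az, obs_el = ray_angles(eye, fingertip)
    true_az, true_el = ray_angles(eye, camera)
    return obs_az - true_az, obs_el - true_el

def corrected_direction(eye, fingertip, offsets):
    """Apply the learned offsets to a new pointing gesture and return a
    unit direction vector for target identification."""
    az, el = ray_angles(eye, fingertip)
    az -= offsets[0]
    el -= offsets[1]
    return np.array([np.cos(el) * np.sin(az),
                     np.sin(el),
                     np.cos(el) * np.cos(az)])
```

By construction, applying the offsets learned from a point-at-the-camera gesture maps that same gesture back onto the true eye-to-camera direction; the paper's premise is that the same offsets then correct this user's gestures toward other targets.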