{"title":"Hand tracking based on the combination of 2D and 3D model in gaze-directed video","authors":"Li Sun, Guizhong Liu","doi":"10.1109/ICME.2011.6012236","DOIUrl":null,"url":null,"abstract":"This paper investigates model based hand tracking in gaze-directed video which contains everyday manipulation activity of human in kitchen environment. The video is recorded by a gaze-directed camera, which can actively directs at the visual attention area from the person who wears the camera. Here we present a method based on the combination of 2D and 3D hand model, which can estimate the position of hand in image accurately and the pose of hand in 3D roughly. The method uses 2D model tracking result to initialize and predict 3D tracking, which saves the number of particles and makes it possible for local configuration adapting. To evaluate our result, we try our algorithm on several pieces of video both from normal camera and gaze-directed camera. The error ratio of the distance between the ground truth and tracking result is used as an objective measurement for evaluating our method. Trajectory of hand movement and results of projected model for every frame show that our method is effective and makes a good foundation for future recognition and analysis.","PeriodicalId":433997,"journal":{"name":"2011 IEEE International Conference on Multimedia and Expo","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE International Conference on Multimedia and Expo","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME.2011.6012236","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper investigates model-based hand tracking in gaze-directed video of everyday manipulation activities performed in a kitchen environment. The video is recorded by a gaze-directed camera that actively points at the visual attention area of the person wearing it. We present a method based on the combination of a 2D and a 3D hand model, which estimates the position of the hand in the image accurately and the 3D pose of the hand roughly. The method uses the 2D model tracking result to initialize and predict the 3D tracking, which reduces the number of particles required and makes local configuration adaptation possible. To evaluate the approach, we run the algorithm on several video sequences from both a normal camera and a gaze-directed camera, using the error ratio of the distance between the ground truth and the tracking result as an objective measure. The hand-movement trajectories and the projected-model results for every frame show that the method is effective and provides a good foundation for future recognition and analysis.
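The abstract does not give implementation details, but the two ideas it names, seeding the 3D particle-based tracker from the 2D tracking result and scoring with a normalized distance error, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the particle state layout, the function names (init_particles_from_2d, error_ratio), the noise parameters, and the normalization length are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of the coarse pipeline described in the abstract:
# a 2D tracker's hand-position estimate seeds the particle set used for 3D
# pose estimation, so far fewer particles are needed. The state layout and
# parameters below are assumptions, not the authors' implementation.

def init_particles_from_2d(center_2d, n_particles=100, pos_sigma=5.0, pose_sigma=0.1):
    """Spawn 3D-pose particles around the 2D tracking result.

    center_2d : (x, y) hand position from the 2D model tracker.
    Each particle = [x, y, depth_scale, 3 rotation angles] (illustrative state).
    """
    particles = np.zeros((n_particles, 6))
    particles[:, 0:2] = center_2d + np.random.randn(n_particles, 2) * pos_sigma
    particles[:, 2] = 1.0 + np.random.randn(n_particles) * 0.05       # rough depth/scale
    particles[:, 3:6] = np.random.randn(n_particles, 3) * pose_sigma  # rough rotation
    return particles

def error_ratio(tracked, ground_truth, ref_length):
    """Distance between tracked and ground-truth positions, normalized by a
    reference length (e.g. the image diagonal) to give a unitless error ratio."""
    d = np.linalg.norm(np.asarray(tracked, dtype=float) - np.asarray(ground_truth, dtype=float))
    return d / ref_length

if __name__ == "__main__":
    # Usage example with made-up numbers
    parts = init_particles_from_2d(center_2d=np.array([320.0, 240.0]))
    print(parts.shape)                               # (100, 6)
    print(error_ratio((318, 243), (320, 240), 800))  # ~0.0045
```

Concentrating the particles around the 2D estimate is what lets a small particle set cover the plausible 3D configurations, which is the efficiency benefit the abstract attributes to combining the two models.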