{"title":"Context-Based Appearance Descriptor for 3D Human Pose Estimation from Monocular Images","authors":"S. Sedai, Bennamoun, D. Huynh","doi":"10.1109/DICTA.2009.81","DOIUrl":null,"url":null,"abstract":"In this paper we propose a novel appearance descriptor for 3D human pose estimation from monocular images using a learning-based technique. Our image-descriptor is based on the intermediate local appearance descriptors that we design to encapsulate local appearance context and to be resilient to noise. We encode the image by the histogram of such local appearance context descriptors computed in an image to obtain the final image-descriptor for pose estimation. We name the final image-descriptor the Histogram of Local Appearance Context (HLAC). We then use Relevance Vector Machine (RVM) regression to learn the direct mapping between the proposed HLAC image-descriptor space and the 3D pose space. Given a test image, we first compute the HLAC descriptor and then input it to the trained regressor to obtain the final output pose in real time. We compared our approach with other methods using a synchronized video and 3D motion dataset. We compared our proposed HLAC image-descriptor with the Histogram of Shape Context and Histogram of SIFT like descriptors. The evaluation results show that HLAC descriptor outperforms both of them in the context of 3D Human pose estimation.","PeriodicalId":277395,"journal":{"name":"2009 Digital Image Computing: Techniques and Applications","volume":"123 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 Digital Image Computing: Techniques and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2009.81","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 20
Abstract
In this paper, we propose a novel appearance descriptor for 3D human pose estimation from monocular images using a learning-based technique. Our image descriptor is built from intermediate local appearance descriptors that we design to encapsulate local appearance context and to be resilient to noise. We encode an image as the histogram of these local appearance context descriptors computed over the image, yielding the final image descriptor for pose estimation, which we name the Histogram of Local Appearance Context (HLAC). We then use Relevance Vector Machine (RVM) regression to learn a direct mapping from the proposed HLAC image-descriptor space to the 3D pose space. Given a test image, we first compute its HLAC descriptor and then feed it to the trained regressor to obtain the output pose in real time. We compared our approach with other methods on a synchronized video and 3D motion dataset, evaluating the proposed HLAC image descriptor against the Histogram of Shape Contexts and Histogram of SIFT-like descriptors. The evaluation results show that the HLAC descriptor outperforms both in the context of 3D human pose estimation.
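To make the pipeline described in the abstract concrete, here is a minimal Python sketch of the general "histogram of local descriptors + direct regression to pose" idea. It is not the authors' implementation: the local descriptor below is a plain normalized-patch placeholder rather than the paper's local appearance context descriptor, the codebook size and kernel are illustrative, and scikit-learn's KernelRidge stands in for RVM regression (scikit-learn has no built-in Relevance Vector Machine). All function names and parameters are hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge


def local_descriptors(image, patch=16, stride=8):
    """Placeholder local appearance descriptors: mean/variance-normalized
    patches sampled on a dense grid (the paper's context descriptor is richer)."""
    h, w = image.shape
    feats = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = image[y:y + patch, x:x + patch].astype(np.float64).ravel()
            p = (p - p.mean()) / (p.std() + 1e-8)
            feats.append(p)
    return np.array(feats)


def histogram_descriptor(image, codebook):
    """Bag-of-words style image descriptor: assign each local descriptor to
    its nearest codeword and build a normalized histogram of codeword counts."""
    d = local_descriptors(image)
    words = codebook.predict(d)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float64)
    return hist / (hist.sum() + 1e-8)


def train(train_images, train_poses, n_words=100):
    """train_images: list of grayscale arrays; train_poses: (N, 3*J) array of
    3D joint coordinates. Learns a codebook and a descriptor-to-pose regressor."""
    all_local = np.vstack([local_descriptors(im) for im in train_images])
    codebook = KMeans(n_clusters=n_words, n_init=5).fit(all_local)
    X = np.array([histogram_descriptor(im, codebook) for im in train_images])
    # KernelRidge used here as a stand-in for RVM regression.
    regressor = KernelRidge(kernel="rbf", alpha=1.0).fit(X, train_poses)
    return codebook, regressor


def predict_pose(image, codebook, regressor):
    """Test-time path: compute the image descriptor, then regress the pose."""
    return regressor.predict(histogram_descriptor(image, codebook)[None, :])[0]

At test time the cost is one descriptor computation plus a single regression evaluation, which is what allows the abstract's claim of real-time pose output once the regressor has been trained.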