{"title":"Object Pose Estimation with Point Cloud Data for Robot Grasping","authors":"Xingfang Wu, Weiming Qu, T. Zhang, D. Luo","doi":"10.1109/ICMA54519.2022.9856092","DOIUrl":null,"url":null,"abstract":"Object pose estimation refers to the estimation of objects’ position and orientation relative to the camera coordinate system using visual information. It is fundamental to grasp point selection and motion planning in robot grasping. Different from other works using depth vision sensors, this work discusses the approach of estimating objects’ pose specially with unilateral and unordered point clouds of single objects in robot grasping. In this paper, we propose to directly consume point clouds to estimate objects’ 3D position and 3D orientations relative to predefined canonical posture, which utilizes the PointCNN [1]. A dataset is also collected specifically for this task, on which we train our models and validate the effectiveness of our proposed method. Code, dataset and pre-trained models are available at https://github.com/shrcrobot/Pose-Estimation","PeriodicalId":120073,"journal":{"name":"2022 IEEE International Conference on Mechatronics and Automation (ICMA)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Mechatronics and Automation (ICMA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMA54519.2022.9856092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Object pose estimation is the estimation of an object's position and orientation relative to the camera coordinate system from visual information. It is fundamental to grasp-point selection and motion planning in robot grasping. Unlike other works that use depth vision sensors, this work addresses pose estimation specifically from unilateral (single-view), unordered point clouds of single objects in robot grasping. In this paper, we propose to consume point clouds directly, using PointCNN [1], to estimate an object's 3D position and 3D orientation relative to a predefined canonical posture. A dataset is also collected specifically for this task, on which we train our models and validate the effectiveness of the proposed method. Code, dataset, and pre-trained models are available at https://github.com/shrcrobot/Pose-Estimation
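The key requirement behind "directly consuming" an unordered point cloud is that the network's output must not depend on the order of the points. The paper achieves this with PointCNN's X-Conv operators; as a rough illustration of the general idea only, the sketch below uses a simpler PointNet-style symmetric max-pool (not the paper's PointCNN), with randomly initialized, purely illustrative weights, to regress a 6-D pose (3-D translation plus 3-D orientation as Euler angles) from a point cloud:

```python
import numpy as np

rng = np.random.default_rng(0)

def per_point_mlp(points, W1, b1, W2, b2):
    # Shared MLP applied to every point independently: (N, 3) -> (N, F)
    h = np.maximum(points @ W1 + b1, 0.0)   # ReLU
    return np.maximum(h @ W2 + b2, 0.0)

def pose_head(points, params):
    """Permutation-invariant pose regression sketch.

    Returns (translation (3,), orientation (3,) as Euler angles).
    Illustrative only -- the paper uses PointCNN, not this architecture.
    """
    W1, b1, W2, b2, W3, b3 = params
    feats = per_point_mlp(points, W1, b1, W2, b2)
    global_feat = feats.max(axis=0)          # symmetric max-pool over points
    out = global_feat @ W3 + b3              # 6-D output: [tx, ty, tz, rx, ry, rz]
    return out[:3], out[3:]

# Random weights, just to exercise the pipeline (no training here).
F = 16
params = (rng.normal(size=(3, 32)), np.zeros(32),
          rng.normal(size=(32, F)), np.zeros(F),
          rng.normal(size=(F, 6)), np.zeros(6))

cloud = rng.normal(size=(128, 3))            # unordered single-view point cloud
t, r = pose_head(cloud, params)

# Permutation invariance: shuffling the points leaves the predicted pose unchanged.
t2, r2 = pose_head(rng.permutation(cloud), params)
assert np.allclose(t, t2) and np.allclose(r, r2)
```

The max-pool is what makes the prediction order-independent: any symmetric aggregation (max, sum, mean) over per-point features yields the same global descriptor for every permutation of the input rows.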