{"title":"使用几何和光度局部特征的三维点云鲁棒描述符","authors":"Hyoseok Hwang, S. Hyung, Sukjune Yoon, K. Roh","doi":"10.1109/IROS.2012.6385920","DOIUrl":null,"url":null,"abstract":"The robust perception of robots is strongly needed to handle various objects skillfully. In this paper, we propose a novel approach to recognize objects and estimate their 6-DOF pose using 3D feature descriptors, called Geometric and Photometric Local Feature (GPLF). The proposed descriptors use both the geometric and photometric information of 3D point clouds from RGB-D camera and integrate those information into efficient descriptors. GPLF shows robust discriminative performance regardless of characteristics such as shapes or appearances of objects in cluttered scenes. The experimental results show how well the proposed approach classifies and identify objects. The performance of pose estimation is robust and stable enough for the robot to manipulate objects. We also compare the proposed approach with previous approaches that use partial information of objects with a representative large-scale RGB-D object dataset.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"59 1","pages":"4027-4033"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Robust descriptors for 3D point clouds using Geometric and Photometric Local Feature\",\"authors\":\"Hyoseok Hwang, S. Hyung, Sukjune Yoon, K. Roh\",\"doi\":\"10.1109/IROS.2012.6385920\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The robust perception of robots is strongly needed to handle various objects skillfully. In this paper, we propose a novel approach to recognize objects and estimate their 6-DOF pose using 3D feature descriptors, called Geometric and Photometric Local Feature (GPLF). The proposed descriptors use both the geometric and photometric information of 3D point clouds from RGB-D camera and integrate those information into efficient descriptors. GPLF shows robust discriminative performance regardless of characteristics such as shapes or appearances of objects in cluttered scenes. The experimental results show how well the proposed approach classifies and identify objects. The performance of pose estimation is robust and stable enough for the robot to manipulate objects. 
We also compare the proposed approach with previous approaches that use partial information of objects with a representative large-scale RGB-D object dataset.\",\"PeriodicalId\":6358,\"journal\":{\"name\":\"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"volume\":\"59 1\",\"pages\":\"4027-4033\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-12-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS.2012.6385920\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2012.6385920","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust descriptors for 3D point clouds using Geometric and Photometric Local Feature
Robust perception is essential for robots to handle a variety of objects skillfully. In this paper, we propose a novel approach to recognizing objects and estimating their 6-DOF pose using a 3D feature descriptor called the Geometric and Photometric Local Feature (GPLF). The proposed descriptor uses both the geometric and photometric information of 3D point clouds captured by an RGB-D camera and integrates this information into a single efficient descriptor. GPLF shows robust discriminative performance in cluttered scenes regardless of object characteristics such as shape or appearance. The experimental results show how well the proposed approach classifies and identifies objects, and the pose estimation is robust and stable enough for a robot to manipulate objects. We also compare the proposed approach with previous approaches that use partial information about objects on a representative large-scale RGB-D object dataset.
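The abstract does not spell out how the descriptor is constructed, but the general recipe of fusing geometric and photometric cues from an RGB-D point cloud can be illustrated with a short sketch. The code below is a hypothetical, simplified example (plain NumPy; the function name `local_descriptor`, the support radius, and the binning are all assumptions), not the actual GPLF formulation: for the neighborhood of a keypoint it histograms two normal-based geometric cues and one color-intensity cue, then concatenates the histograms into a single vector.

```python
import numpy as np

def local_descriptor(points, normals, colors, keypoint_idx,
                     radius=0.05, n_bins=8):
    """Toy geometric + photometric local descriptor for an RGB-D point cloud.

    points, normals, colors: arrays of shape (N, 3); colors in [0, 1].
    NOTE: a generic illustration of combining geometric and photometric
    cues into one histogram descriptor, NOT the exact GPLF of the paper.
    """
    p = points[keypoint_idx]
    n = normals[keypoint_idx]
    c = colors[keypoint_idx]

    # Neighbors inside the support radius (excluding the keypoint itself).
    dist = np.linalg.norm(points - p, axis=1)
    mask = (dist > 1e-9) & (dist < radius)
    if not np.any(mask):
        return np.zeros(3 * n_bins)

    q, nq, cq = points[mask], normals[mask], colors[mask]

    # Geometric cue 1: angle between the keypoint normal and neighbor normals.
    cos_nn = np.clip(np.sum(nq * n, axis=1), -1.0, 1.0)

    # Geometric cue 2: angle between the keypoint normal and the direction
    # to each neighbor (captures local surface convexity/concavity).
    dirs = (q - p) / np.linalg.norm(q - p, axis=1, keepdims=True)
    cos_nd = np.clip(dirs @ n, -1.0, 1.0)

    # Photometric cue: intensity difference between neighbor and keypoint.
    intensity_diff = cq.mean(axis=1) - c.mean()

    # Histogram each cue and concatenate into one descriptor vector.
    h1, _ = np.histogram(cos_nn, bins=n_bins, range=(-1, 1), density=True)
    h2, _ = np.histogram(cos_nd, bins=n_bins, range=(-1, 1), density=True)
    h3, _ = np.histogram(intensity_diff, bins=n_bins, range=(-1, 1), density=True)
    return np.concatenate([h1, h2, h3])
```

In a recognition pipeline of the kind the abstract describes, such descriptors would be computed at keypoints of both the scene and the model clouds and matched (for example, by nearest neighbor in descriptor space) to establish correspondences, from which the 6-DOF pose can then be estimated.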