Determination of 3D object pose in point cloud with CAD model

D. Nguyen, J. P. Ko, J. Jeon
{"title":"基于CAD模型的点云中三维物体位姿确定","authors":"D. Nguyen, J. P. Ko, J. Jeon","doi":"10.1109/FCV.2015.7103725","DOIUrl":null,"url":null,"abstract":"This paper introduces improvements to estimate 3D object pose from point clouds. We use point-pair feature for matching instead of traditional approaches using local feature descriptors. In order to obtain high accuracy estimation, a discriminative descriptor is introduced for point-pair features. The object model is a set of point pair descriptors computed from CAD model. The voting process is performed on a local area of each key-point to boost the performance. Due to the simplicity of descriptor, a matching threshold is defined to enable the robustness of the algorithm. A clustering algorithm is defined for grouping similar poses together. Best pose candidates will be selected for refining and final verification will be performed. The robustness and accuracy of our approach are demonstrated through experiments. Our approach can be compared to state-of-the-art algorithms in terms of recognition rates. These high accurate poses especially useful for robot in manipulating objects in the factory. Since our approach does not use color feature, it is independent to light conditions. The system give accurate pose estimation even when there is no light in the area.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Determination of 3D object pose in point cloud with CAD model\",\"authors\":\"D. Nguyen, J. P. Ko, J. Jeon\",\"doi\":\"10.1109/FCV.2015.7103725\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper introduces improvements to estimate 3D object pose from point clouds. We use point-pair feature for matching instead of traditional approaches using local feature descriptors. In order to obtain high accuracy estimation, a discriminative descriptor is introduced for point-pair features. The object model is a set of point pair descriptors computed from CAD model. The voting process is performed on a local area of each key-point to boost the performance. Due to the simplicity of descriptor, a matching threshold is defined to enable the robustness of the algorithm. A clustering algorithm is defined for grouping similar poses together. Best pose candidates will be selected for refining and final verification will be performed. The robustness and accuracy of our approach are demonstrated through experiments. Our approach can be compared to state-of-the-art algorithms in terms of recognition rates. These high accurate poses especially useful for robot in manipulating objects in the factory. Since our approach does not use color feature, it is independent to light conditions. 
The system give accurate pose estimation even when there is no light in the area.\",\"PeriodicalId\":424974,\"journal\":{\"name\":\"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-05-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FCV.2015.7103725\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FCV.2015.7103725","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

This paper introduces improvements to estimating 3D object pose from point clouds. We use point-pair features for matching instead of traditional approaches based on local feature descriptors. To obtain high-accuracy estimates, a discriminative descriptor is introduced for point-pair features. The object model is a set of point-pair descriptors computed from the CAD model. The voting process is performed on a local area around each key-point to boost performance. Because the descriptor is simple, a matching threshold is defined to ensure the robustness of the algorithm. A clustering algorithm groups similar poses together; the best pose candidates are then selected for refinement, and a final verification is performed. The robustness and accuracy of our approach are demonstrated through experiments, and it is comparable to state-of-the-art algorithms in terms of recognition rate. These highly accurate poses are especially useful for robots manipulating objects in a factory. Since our approach does not use color features, it is independent of lighting conditions; the system gives accurate pose estimates even when there is no light in the area.
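The page reproduces only the abstract, so the exact descriptor and voting details are not spelled out here. As a rough illustration of the point-pair feature that this line of work builds on (the classical four-dimensional feature of Drost et al.), the Python sketch below computes the feature for two oriented points and quantizes it into a key for a model hash table used in voting. The function names, bin widths, and sample points are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a classical point-pair feature (PPF), after Drost et al. (2010).
# The paper's own discriminative descriptor is not specified on this page; the
# quantization step and bin widths below are illustrative assumptions only.
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """4D feature F = (||d||, ang(n1, d), ang(n2, d), ang(n1, n2))
    for two oriented points (position, unit normal)."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return None  # degenerate pair, skip
    d_unit = d / dist

    def angle(a, b):
        # Angle between two unit vectors, clipped for numerical safety.
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([dist, angle(n1, d_unit), angle(n2, d_unit), angle(n1, n2)])

def quantize_ppf(f, dist_step=0.01, angle_step=np.deg2rad(12)):
    """Discretize a PPF into an integer key so that similar pairs fall into the
    same model-table bucket during voting. Bin sizes are hypothetical."""
    return (int(f[0] / dist_step),
            int(f[1] / angle_step),
            int(f[2] / angle_step),
            int(f[3] / angle_step))

# Example usage with two made-up oriented points.
p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
p2, n2 = np.array([0.05, 0.02, 0.01]), np.array([0.0, 1.0, 0.0])
f = point_pair_feature(p1, n1, p2, n2)
key = quantize_ppf(f)
```

In the offline stage, such keys would be computed for all point pairs of the CAD model and stored in a table; at run time, pairs around each scene key-point are looked up and vote for candidate poses, which are then clustered, refined, and verified as the abstract describes.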