A new method for fast detection and pose estimation of texture-less industrial parts*

Ziqi Chai, Sheng Bi, Zhengya Guo, Z. Xiong
{"title":"A new method for fast detection and pose estimation of texture-less industrial parts*","authors":"Ziqi Chai, Sheng Bi, Zhengya Guo, Z. Xiong","doi":"10.1109/M2VIP.2018.8600863","DOIUrl":null,"url":null,"abstract":"Estimation of the 6-Dof pose of 3d objects has been a hot research field for a long time. When robots and cameras are integrated into a system, the pose of the object can be estimated through the camera, and then the robot can be used to manipulate the object accurately. Traditional object pose estimation methods include the template-matching based method and invariant feature-based method. The method based on invariant features requires the extraction of invariant features from images with rich texture, so it is not suitable for texture-less parts, which are common in industrial applications. The template-matching method is based on edge and contour information, so it is more suitable for part detection and pose estimation of industrial applications. LINEMOD proposed by Hinterstoisser is a successful template matching method, which accelerates the template matching process through a specially designed storage structure. However, the template-based matching method generally adopts sliding window method and is very time-consuming in computation, which makes it impractical for robotic application. In this paper, we propose a new method, which combines Fully Convolutional Network (FCN) with LINEMOD algorithm. With this method, the detection and location of the object in the image can be archived quickly. Then the local image, instead of the whole image, is used for LINEMOD template matching. Experimental results show that, compared with the standard LINEMOD method, the pose estimation speed can be increased and consistent matching results can be obtained.","PeriodicalId":365579,"journal":{"name":"2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/M2VIP.2018.8600863","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Estimation of the 6-DoF pose of 3D objects has long been an active research area. When robots and cameras are integrated into one system, the pose of an object can be estimated through the camera, and the robot can then manipulate the object accurately. Traditional object pose estimation methods include template-matching-based methods and invariant-feature-based methods. Invariant-feature-based methods require extracting invariant features from richly textured images, so they are not suitable for texture-less parts, which are common in industrial applications. Template-matching methods rely on edge and contour information, so they are better suited to part detection and pose estimation in industrial settings. LINEMOD, proposed by Hinterstoisser, is a successful template-matching method that accelerates the matching process through a specially designed storage structure. However, template-based matching generally relies on a sliding-window search and is computationally expensive, which makes it impractical for robotic applications. In this paper, we propose a new method that combines a Fully Convolutional Network (FCN) with the LINEMOD algorithm. With this method, detection and localization of the object in the image can be achieved quickly. The local image, rather than the whole image, is then used for LINEMOD template matching. Experimental results show that, compared with the standard LINEMOD method, the pose estimation speed is increased while consistent matching results are obtained.
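The two-stage idea in the abstract (an FCN first localizes the part, then LINEMOD template matching runs only on the cropped region) can be illustrated with a minimal sketch. The paper's own network and matcher are not reproduced here: the sketch below assumes a generic torchvision FCN (fcn_resnet50) as the detector, and `linemod_match()` is a hypothetical placeholder for the LINEMOD template-matching stage.

```python
# Sketch of the pipeline described in the abstract:
# (1) an FCN localizes the part, (2) LINEMOD matching runs only on the ROI.
# The FCN is torchvision's generic fcn_resnet50; linemod_match() is a
# hypothetical stand-in, not the authors' implementation.
import torch
import torchvision
from torchvision.transforms import functional as TF
from PIL import Image


def detect_roi(image, score_thresh=0.5):
    """Run a segmentation FCN and return a bounding box (x0, y0, x1, y1)."""
    model = torchvision.models.segmentation.fcn_resnet50(weights="DEFAULT")
    model.eval()
    x = TF.normalize(TF.to_tensor(image),
                     mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225]).unsqueeze(0)
    with torch.no_grad():
        out = model(x)["out"][0]                  # (num_classes, H, W)
    probs = out.softmax(dim=0)
    fg = (1.0 - probs[0]) > score_thresh          # class 0 is background
    ys, xs = torch.nonzero(fg, as_tuple=True)
    if len(xs) == 0:
        return None                               # nothing detected
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())


def linemod_match(roi):
    """Hypothetical placeholder for LINEMOD template matching on the ROI."""
    raise NotImplementedError("plug in a LINEMOD matcher here")


if __name__ == "__main__":
    img = Image.open("part.png").convert("RGB")   # assumed input image
    box = detect_roi(img)
    if box is not None:
        x0, y0, x1, y1 = box
        roi = img.crop((x0, y0, x1, y1))          # local image only
        pose = linemod_match(roi)                 # match templates locally
```

The point of the design is that the sliding-window template search, which dominates LINEMOD's runtime, is restricted to the small detected region instead of the full frame, which is where the reported speed-up comes from.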