Robot gluing localization method based on monocular vision

Jinghao Fan, Yingjie Zhang, Shihan Peng, Yuxiang Zhang, G. Zhang
{"title":"Robot gluing localization method based on monocular vision","authors":"Jinghao Fan, Yingjie Zhang, Shihan Peng, Yuxiang Zhang, G. Zhang","doi":"10.1117/12.2643961","DOIUrl":null,"url":null,"abstract":"A localization method based on monocular vision is proposed to solve the problem of poor flexibility, high cost and unstable accuracy of glue dispensing robot. The method includes the workpiece image feature extraction method based on distribution model and the optimized PNP algorithm based on depth calibration, which can locate the threedimensional coordinates of the workpiece and further generate the gluing track. Firstly, the layout and local coordinates of feature points are determined according to the workpiece model and gluing process, and the feature distribution model and template set are established. Then the image coordinates of feature points are extracted step by step by using workpiece contour features and image gray features, combining multi template and multi angle matching with shape detection, and using acceleration strategies such as image pyramid and angle layer by layer subdivision. Finally, the PNP algorithm is optimized in the Z direction through the depth calibration method to realize the high-precision positioning of the workpiece. The localization experiments of various types of reducer shells under different imaging environments were carried out. The experimental results show that the method has better feature extraction effect for workpieces with complex structure in chaotic environment, and the maximum localization error in one direction is within ± 0.5 mm, which meets the application needs of robot glue positioning. The method can detect the offset of 6 degrees of freedom of the target workpiece at the same time, which has a wider application than the general 2D visual localization method. 
It can also be used for the localization of parts in other scenes.","PeriodicalId":184319,"journal":{"name":"Optical Frontiers","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optical Frontiers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2643961","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

A localization method based on monocular vision is proposed to address the poor flexibility, high cost, and unstable accuracy of glue-dispensing robots. The method comprises a workpiece image feature-extraction method based on a distribution model and an optimized PnP algorithm based on depth calibration, which together locate the three-dimensional coordinates of the workpiece and generate the gluing track. First, the layout and local coordinates of the feature points are determined from the workpiece model and the gluing process, and a feature distribution model and a template set are established. Then, the image coordinates of the feature points are extracted step by step from workpiece contour features and image gray-level features, combining multi-template, multi-angle matching with shape detection and employing acceleration strategies such as an image pyramid and layer-by-layer angle subdivision. Finally, the PnP algorithm is optimized in the Z direction through a depth-calibration method to achieve high-precision positioning of the workpiece. Localization experiments on several types of reducer shells were carried out under different imaging environments. The results show that the method extracts features well from workpieces with complex structure in cluttered environments, and that the maximum localization error in any single direction is within ±0.5 mm, meeting the application requirements of robot gluing localization. The method can simultaneously detect offsets in all six degrees of freedom of the target workpiece, giving it wider applicability than general 2D visual localization methods. It can also be used to localize parts in other scenarios.
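The abstract does not detail how the Z-direction depth calibration is performed. A minimal sketch of the general idea, assuming a simple linear error model: the depth estimate produced by a PnP solver is treated as carrying a systematic drift, and a least-squares fit against known reference depths supplies a correction. All names and data below are invented for illustration and do not represent the paper's actual procedure.

```python
import numpy as np

# Hypothetical depth-calibration sketch (not the paper's method):
# raw PnP depth estimates for reference workpiece placements are paired
# with ground-truth depths (e.g. gauge-measured), and a linear correction
# Z_true ≈ a * Z_pnp + b is fitted by least squares.

# Synthetic calibration data (mm): true depths and raw PnP estimates
# exhibiting a small, roughly linear systematic drift.
z_true = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
z_pnp = np.array([303.1, 353.9, 404.6, 455.2, 506.0])

# Fit the linear correction coefficients.
a, b = np.polyfit(z_pnp, z_true, deg=1)

def calibrate_depth(z_estimate):
    """Apply the fitted linear correction to a raw PnP depth estimate."""
    return a * z_estimate + b

# A new raw estimate of 429.8 mm for a part actually near 425 mm
# is pulled back close to the true depth by the correction.
print(calibrate_depth(429.8))
```

In practice the correction model (linear, polynomial, or lookup-based) would be chosen from the observed residuals of the PnP solver over the working depth range.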