Jinghao Fan, Yingjie Zhang, Shihan Peng, Yuxiang Zhang, G. Zhang
Title: Robot gluing localization method based on monocular vision
DOI: 10.1117/12.2643961
Journal: Optical Frontiers
Publication date: 2022-08-11
Citations: 0
Abstract
A localization method based on monocular vision is proposed to address the poor flexibility, high cost, and unstable accuracy of glue-dispensing robots. The method combines a workpiece image feature extraction method based on a feature distribution model with a PnP algorithm optimized by depth calibration, which together locate the three-dimensional coordinates of the workpiece and generate the gluing trajectory. First, the layout and local coordinates of the feature points are determined from the workpiece model and the gluing process, and the feature distribution model and template set are established. Then, the image coordinates of the feature points are extracted step by step using workpiece contour features and image gray-level features, combining multi-template, multi-angle matching with shape detection, and employing acceleration strategies such as image pyramids and layer-by-layer angle subdivision. Finally, the PnP algorithm is refined in the Z direction through a depth calibration method to achieve high-precision positioning of the workpiece. Localization experiments on various types of reducer shells under different imaging conditions were carried out. The results show that the method extracts features more reliably for workpieces with complex structure in cluttered environments, and the maximum localization error in any single direction is within ±0.5 mm, which meets the application requirements of robot glue positioning. Because the method simultaneously detects the offset of the target workpiece in all six degrees of freedom, it has broader applicability than typical 2D visual localization methods. It can also be used for part localization in other scenes.
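The abstract names image pyramids as an acceleration strategy for the multi-template matching step but gives no implementation details. The following is a minimal coarse-to-fine sketch of the general technique: exhaustive normalized cross-correlation at the top of the pyramid, then a small local refinement at each finer level. The function names, the NCC score, the ±2 px refinement window, and the pure-NumPy implementation are all assumptions for illustration, not the paper's code.

```python
import numpy as np

def ncc(patch, tmpl):
    """Zero-mean normalized cross-correlation between two equal-size arrays."""
    p = patch - patch.mean()
    t = tmpl - tmpl.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def match_pyramid(img, tmpl, levels=2):
    """Coarse-to-fine template matching: exhaustive search at the coarsest
    pyramid level, then a local +/-2 px refinement at each finer level."""
    imgs, tmps = [img], [tmpl]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        tmps.append(downsample(tmps[-1]))
    # exhaustive search at the coarsest level
    I, T = imgs[-1], tmps[-1]
    th, tw = T.shape
    best, pos = -1.0, (0, 0)
    for y in range(I.shape[0] - th + 1):
        for x in range(I.shape[1] - tw + 1):
            s = ncc(I[y:y+th, x:x+tw], T)
            if s > best:
                best, pos = s, (y, x)
    # propagate the estimate down the pyramid, refining locally
    for lvl in range(levels - 1, -1, -1):
        I, T = imgs[lvl], tmps[lvl]
        th, tw = T.shape
        cy, cx = pos[0] * 2, pos[1] * 2
        best = -1.0
        for y in range(max(0, cy - 2), min(I.shape[0] - th, cy + 2) + 1):
            for x in range(max(0, cx - 2), min(I.shape[1] - tw, cx + 2) + 1):
                s = ncc(I[y:y+th, x:x+tw], T)
                if s > best:
                    best, pos = s, (y, x)
    return pos
```

The payoff of the pyramid is that the exhaustive search runs only on a small image, while the full-resolution levels are searched over a constant-size neighborhood; the paper's layer-by-layer angle subdivision applies the same coarse-to-fine idea to rotation.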
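The abstract also does not specify the form of the Z-direction depth calibration applied to the PnP result. A common and minimal choice, shown here purely as an assumed illustration (the function names and the linear model are not from the paper), is a linear correction Z_true ≈ a·Z_pnp + b fitted by least squares from workpieces placed at known reference depths:

```python
import numpy as np

def fit_depth_correction(z_pnp, z_true):
    """Fit Z_true ~ a * Z_pnp + b by ordinary least squares.
    z_pnp:  depths estimated by PnP at the calibration poses.
    z_true: reference depths measured independently (e.g. by the robot)."""
    A = np.column_stack([np.asarray(z_pnp, float), np.ones(len(z_pnp))])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(z_true, float), rcond=None)
    return a, b

def correct_depth(z, a, b):
    """Apply the fitted correction to a new PnP depth estimate."""
    return a * z + b
```

With such a model, any systematic scale or offset error in the PnP depth (from imperfect intrinsics, for example) is absorbed into a and b, while the X/Y estimates from PnP are left unchanged.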