Vision Guided Manipulation by Learning from Demonstration

Xueyi Chi, Huiliang Shang, Xiong-Zi Chen
DOI: 10.1109/ICNSC55942.2022.10004126
Published in: 2022 IEEE International Conference on Networking, Sensing and Control (ICNSC), December 15, 2022

Abstract

Most commonly used learning-based target detection algorithms require large datasets and long training times, and the network must be retrained whenever the target changes. In response to this problem, we aim to build a vision-based grasping system that acquires target features through multi-angle demonstration and selects an appropriate matching method according to the geometric shape of the target for more accurate detection. The method involves improved template matching, comparing the means of the BGR channels and a shape parameter with the features extracted from demonstration. Our improvements to the template matching algorithm overcome its inability to recognize rotated targets. We also combine 2D recognition with 3D point clouds to obtain the grasping point. Simulation experiments verify that our vision-guided manipulation system can learn and extract target features from a few demonstrations and select an appropriate detection method, after which the robotic arm performs manipulations such as grasping the target.
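The abstract's core claimed improvement is template matching that tolerates target rotation. A standard way to achieve this, sketched below with plain NumPy, is to sweep the template through a set of rotations and keep the pose with the best normalized cross-correlation score. This is a minimal illustration of the general technique, not the paper's implementation; it only tries the four 90-degree rotations (finer angle steps would require interpolated rotation, e.g. an affine warp), and all function names are our own.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized grayscale patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_with_rotation(image, template):
    """Slide the template over the image at 0/90/180/270 degrees and keep the
    best NCC score. Returns (score, (row, col) of the top-left corner, angle)."""
    best = (-1.0, (0, 0), 0)
    for k in range(4):                      # four 90-degree rotations
        t = np.rot90(template, k)
        th, tw = t.shape
        H, W = image.shape
        for y in range(H - th + 1):         # exhaustive sliding-window search
            for x in range(W - tw + 1):
                s = ncc(image[y:y + th, x:x + tw], t)
                if s > best[0]:
                    best = (s, (y, x), 90 * k)
    return best
```

Because NCC is invariant to brightness offset and scaling, the same machinery can also compare the demonstrated BGR channel means per channel before committing to a match.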
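Combining 2D recognition with a 3D point cloud to obtain the grasping point typically means back-projecting the matched pixel through the depth camera's pinhole model. The sketch below shows that standard deprojection under assumed intrinsics (fx, fy, cx, cy); the paper does not spell out its exact procedure, so this is an illustrative assumption.

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with measured depth (in meters) into camera
    coordinates using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

The resulting (X, Y, Z) point, expressed in the camera frame, would then be transformed into the robot base frame via the hand-eye calibration before being sent to the arm as a grasp target.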