{"title":"从演示中学习视觉引导操作","authors":"Xueyi Chi, Huiliang Shang, Xiong-Zi Chen","doi":"10.1109/ICNSC55942.2022.10004126","DOIUrl":null,"url":null,"abstract":"Most of the commonly used learning target detection algorithms require a large amount of data sets and time for training, and if the target has to be changed, the network needs to be retrained. In response to this problem, we aim to build a vision-based grasping system, which acquires target features through multi-angle demonstration, and can select an appropriate matching method according to the geometric shape of the target to detect more accurately. The method involves improved template matching, comparing the means of BGR channels and shape parameter with the features from demonstration. Our improvements to the template matching algorithm solve the shortcomings of its inability to recognize rotated targets. We also combine 2D recognition with 3D point clouds to obtain the grasping point. It has been verified by simulation experiments that our vision guided manipulation system can learn and extract the target features through a few demonstrations, and select an appropriate method to detect the target, the robotic arm performs manipulations such as grasping the target.","PeriodicalId":230499,"journal":{"name":"2022 IEEE International Conference on Networking, Sensing and Control (ICNSC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Vision Guided Manipulation by Learning from Demonstration\",\"authors\":\"Xueyi Chi, Huiliang Shang, Xiong-Zi Chen\",\"doi\":\"10.1109/ICNSC55942.2022.10004126\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most of the commonly used learning target detection algorithms require a large amount of data sets and time for training, and if the target has to be changed, the network needs to be retrained. 
In response to this problem, we aim to build a vision-based grasping system, which acquires target features through multi-angle demonstration, and can select an appropriate matching method according to the geometric shape of the target to detect more accurately. The method involves improved template matching, comparing the means of BGR channels and shape parameter with the features from demonstration. Our improvements to the template matching algorithm solve the shortcomings of its inability to recognize rotated targets. We also combine 2D recognition with 3D point clouds to obtain the grasping point. It has been verified by simulation experiments that our vision guided manipulation system can learn and extract the target features through a few demonstrations, and select an appropriate method to detect the target, the robotic arm performs manipulations such as grasping the target.\",\"PeriodicalId\":230499,\"journal\":{\"name\":\"2022 IEEE International Conference on Networking, Sensing and Control (ICNSC)\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Networking, Sensing and Control (ICNSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICNSC55942.2022.10004126\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Networking, Sensing and Control 
(ICNSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNSC55942.2022.10004126","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Vision Guided Manipulation by Learning from Demonstration
Most commonly used learning-based target detection algorithms require large datasets and long training times, and the network must be retrained whenever the target changes. To address this problem, we build a vision-based grasping system that acquires target features from multi-angle demonstrations and selects an appropriate matching method according to the geometric shape of the target, improving detection accuracy. The method uses improved template matching, comparing the means of the BGR channels and a shape parameter against the features extracted from the demonstrations. Our improvements to the template matching algorithm overcome its inability to recognize rotated targets. We also combine 2D recognition with 3D point clouds to obtain the grasping point. Simulation experiments verify that our vision-guided manipulation system can learn and extract target features from a few demonstrations, select an appropriate detection method, and enable the robotic arm to perform manipulations such as grasping the target.
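The comparison of demonstration features against candidate regions described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature names, the use of circularity as the shape parameter, and the weighted-distance score are assumptions made for the example.

```python
import numpy as np

def bgr_means(patch):
    """Mean of each color channel for an H x W x 3 patch in BGR order."""
    return patch.reshape(-1, 3).mean(axis=0)

def circularity(area, perimeter):
    """A simple shape parameter: 4*pi*A / P^2, equal to 1.0 for a perfect circle."""
    return 4.0 * np.pi * area / (perimeter ** 2)

def match_score(demo_feat, cand_feat, w_color=1.0, w_shape=1.0):
    """Weighted distance between demonstration features and a candidate's
    features; a lower score means a better match."""
    color_dist = np.linalg.norm(demo_feat["bgr"] - cand_feat["bgr"])
    shape_dist = abs(demo_feat["shape"] - cand_feat["shape"])
    return w_color * color_dist + w_shape * shape_dist

# Hypothetical features: a roughly circular blue object learned from demonstration,
# compared against a similar candidate and a red, elongated distractor.
demo = {"bgr": np.array([200.0, 10.0, 10.0]), "shape": 0.90}
good = {"bgr": np.array([195.0, 12.0, 11.0]), "shape": 0.88}
bad  = {"bgr": np.array([10.0, 10.0, 200.0]), "shape": 0.40}

assert match_score(demo, good) < match_score(demo, bad)
```

In a full pipeline, `bgr_means` would be computed over the segmented target region and `circularity` from its contour area and perimeter; the candidate with the lowest score would then be passed to the 3D stage to locate the grasping point.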