Tomoki Kawagoshi, S. Arnold, Kimitoshi Yamazaki
2021 IEEE/SICE International Symposium on System Integration (SII), published 2021-01-11
DOI: 10.1109/IEEECONF49454.2021.9382691
Visual Servoing Using Virtual Space for Both Learning and Task Execution
In this paper, we describe a framework for performing an object picking task using visual servoing. As a robotic manipulator approaches the object to be grasped, a convolutional neural network (CNN) generates its motions for visual servoing. However, obtaining an appropriate CNN requires a large amount of training data. We therefore propose a method that uses a virtual environment to reduce the data-preparation burden. Moreover, while the actual object picking task is performed, sensor data acquisition and motion generation are also carried out in the virtual environment. This makes it possible to approach the object even when textures change in the real environment where the robot operates. Object grasping experiments were conducted on a rectangular box and a cylindrical object, and the performance of the proposed framework was verified.
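The pipeline the abstract describes (observe the scene via the virtual environment, let a CNN map the observation to a motion command, apply it, and repeat until the gripper reaches the object) can be sketched as a generic closed-loop visual-servoing routine. This is a hypothetical illustration, not the authors' implementation: `policy` is a stand-in for the trained CNN (here a simple proportional step replaces the learned mapping), `render_virtual` is a stand-in for the virtual-environment renderer, and all names are assumptions.

```python
import numpy as np


def render_virtual(pose: np.ndarray) -> np.ndarray:
    """Stand-in for rendering the virtual environment at the current pose.
    A real system would return a camera image; here a dummy array suffices."""
    return np.zeros((64, 64))


def policy(image: np.ndarray, target: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Stand-in for the trained CNN: maps the current observation to a small
    end-effector motion. A proportional step toward the target substitutes
    for the learned image-to-motion mapping."""
    return 0.2 * (target - pose)


def servo_loop(start, target, max_steps=50, tol=1e-3):
    """Closed-loop visual servoing: observe, predict a motion, apply it,
    and stop once the end effector is within `tol` of the target."""
    pose = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(max_steps):
        image = render_virtual(pose)          # sensor data from virtual space
        delta = policy(image, target, pose)   # CNN-generated motion command
        pose = pose + delta                   # move the manipulator
        if np.linalg.norm(target - pose) < tol:
            break
    return pose
```

Because the observation is rendered from the virtual environment rather than taken directly from the real camera, a loop of this shape is insensitive to texture changes in the real scene, which is the property the abstract highlights.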