Grasp Detection for Assembly Robots Using High-fidelity Synthetic Data
Yeheng Chen, Nan Li, Jian Zhang, Wenxuan Chen, Yuehua Li, Haifeng Li
2022 International Conference on Service Robotics (ICoSR), June 2022. DOI: 10.1109/ICoSR57188.2022.00024
{"title":"基于高保真合成数据的装配机器人抓取检测","authors":"Yeheng Chen, Nan Li, Jian Zhang, Wenxuan Chen, Yuehua Li, Haifeng Li","doi":"10.1109/ICoSR57188.2022.00024","DOIUrl":null,"url":null,"abstract":"Artificial intelligence-driven collaborative robots (cobots) have attracted significant interest. Object perception is one of the important capabilities for robotic grasping in complex environments. Vision-based methods in the main perception tasks of robotic systems mostly require large pre-labeled training datasets. Building large-scale datasets that satisfy the conditions has always been a challenge in this field. In this work, we propose a robot vision system for robotic grasping tasks. The proposed system's primary design goal is to minimize the cost of human annotation during system setup. Moreover, since it is difficult to collect sufficient labeled training data, the existing methods are typically trained on real data that are highly correlated with test data. The system we presented includes a one-shot deep neural network trained with high-fidelity synthetic data based entirely on domain randomization to avoid collecting large amounts of human-annotated data and inaccurate annotation data in real world. At last, we build the vision system in the real environment and simulation with the robot operating system (ROS).","PeriodicalId":234590,"journal":{"name":"2022 International Conference on Service Robotics (ICoSR)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Grasp Detection for Assembly Robots Using High-fidelity Synthetic Data\",\"authors\":\"Yeheng Chen, Nan Li, Jian Zhang, Wenxuan Chen, Yuehua Li, Haifeng Li\",\"doi\":\"10.1109/ICoSR57188.2022.00024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence-driven collaborative robots (cobots) have attracted significant interest. Object perception is one of the important capabilities for robotic grasping in complex environments. Vision-based methods in the main perception tasks of robotic systems mostly require large pre-labeled training datasets. Building large-scale datasets that satisfy the conditions has always been a challenge in this field. In this work, we propose a robot vision system for robotic grasping tasks. The proposed system's primary design goal is to minimize the cost of human annotation during system setup. Moreover, since it is difficult to collect sufficient labeled training data, the existing methods are typically trained on real data that are highly correlated with test data. The system we presented includes a one-shot deep neural network trained with high-fidelity synthetic data based entirely on domain randomization to avoid collecting large amounts of human-annotated data and inaccurate annotation data in real world. 
At last, we build the vision system in the real environment and simulation with the robot operating system (ROS).\",\"PeriodicalId\":234590,\"journal\":{\"name\":\"2022 International Conference on Service Robotics (ICoSR)\",\"volume\":\"65 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Service Robotics (ICoSR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICoSR57188.2022.00024\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Service Robotics (ICoSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICoSR57188.2022.00024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Grasp Detection for Assembly Robots Using High-fidelity Synthetic Data
Artificial-intelligence-driven collaborative robots (cobots) have attracted significant interest. Object perception is one of the key capabilities a robot needs for grasping in complex environments, yet vision-based methods for the main perception tasks of robotic systems mostly require large pre-labeled training datasets, and building datasets of that scale has long been a challenge in the field. In this work, we propose a robot vision system for grasping tasks whose primary design goal is to minimize the cost of human annotation during system setup. Because sufficient labeled training data are difficult to collect, existing methods are typically trained on real data that are highly correlated with the test data. Our system instead uses a one-shot deep neural network trained entirely on high-fidelity synthetic data generated with domain randomization, which avoids both collecting large amounts of human-annotated real-world data and the annotation errors that come with it. Finally, we build the vision system in both a real environment and simulation using the Robot Operating System (ROS).
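The abstract gives no implementation details, but the domain-randomization idea it relies on is simple to sketch: sample scene parameters (textures, lighting, camera pose, object pose) from wide distributions so that renders cover the real world's variability, and take grasp labels directly from the simulator's ground truth. The Python sketch below is illustrative only and is not the authors' code; every name (SceneParams, sample_scene, the parameter ranges) is a hypothetical stand-in for a real rendering pipeline.

```python
import random
from dataclasses import dataclass


@dataclass
class SceneParams:
    """Randomized scene parameters for one synthetic training image."""
    texture_id: int          # index into a pool of distractor textures
    light_intensity: float   # arbitrary units
    light_azimuth_deg: float
    camera_height_m: float
    object_yaw_deg: float    # in-plane rotation of the target part


def sample_scene(rng: random.Random) -> SceneParams:
    """Draw one scene from deliberately wide uniform ranges, so the
    synthetic distribution subsumes the conditions seen at test time."""
    return SceneParams(
        texture_id=rng.randrange(500),
        light_intensity=rng.uniform(0.2, 2.0),
        light_azimuth_deg=rng.uniform(0.0, 360.0),
        camera_height_m=rng.uniform(0.4, 1.2),
        object_yaw_deg=rng.uniform(0.0, 360.0),
    )


def generate_dataset(n: int, seed: int = 0) -> list[SceneParams]:
    """Sample n scene configurations. In a full pipeline each one would
    be rendered to an image, and the grasp annotation would be read off
    the simulator state, so no human labeling is needed."""
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]


if __name__ == "__main__":
    for scene in generate_dataset(3):
        # A real pipeline would pass `scene` to a renderer and save the
        # image together with the exact ground-truth grasp label.
        print(scene)
```

The point of sampling from ranges wider than any single real deployment is that the trained network never overfits to one simulated appearance, which is what lets a model trained purely on synthetic images transfer to real camera data.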