Grasp Detection for Assembly Robots Using High-fidelity Synthetic Data

Yeheng Chen, Nan Li, Jian Zhang, Wenxuan Chen, Yuehua Li, Haifeng Li
{"title":"Grasp Detection for Assembly Robots Using High-fidelity Synthetic Data","authors":"Yeheng Chen, Nan Li, Jian Zhang, Wenxuan Chen, Yuehua Li, Haifeng Li","doi":"10.1109/ICoSR57188.2022.00024","DOIUrl":null,"url":null,"abstract":"Artificial intelligence-driven collaborative robots (cobots) have attracted significant interest. Object perception is one of the important capabilities for robotic grasping in complex environments. Vision-based methods in the main perception tasks of robotic systems mostly require large pre-labeled training datasets. Building large-scale datasets that satisfy the conditions has always been a challenge in this field. In this work, we propose a robot vision system for robotic grasping tasks. The proposed system's primary design goal is to minimize the cost of human annotation during system setup. Moreover, since it is difficult to collect sufficient labeled training data, the existing methods are typically trained on real data that are highly correlated with test data. The system we presented includes a one-shot deep neural network trained with high-fidelity synthetic data based entirely on domain randomization to avoid collecting large amounts of human-annotated data and inaccurate annotation data in real world. At last, we build the vision system in the real environment and simulation with the robot operating system (ROS).","PeriodicalId":234590,"journal":{"name":"2022 International Conference on Service Robotics (ICoSR)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Service Robotics (ICoSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICoSR57188.2022.00024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial intelligence-driven collaborative robots (cobots) have attracted significant interest. Object perception is one of the key capabilities for robotic grasping in complex environments. Vision-based methods for the main perception tasks of robotic systems typically require large pre-labeled training datasets, and building large-scale datasets that meet these requirements has long been a challenge in this field. In this work, we propose a robot vision system for robotic grasping tasks. The system's primary design goal is to minimize the cost of human annotation during setup. Moreover, because sufficient labeled training data is difficult to collect, existing methods are typically trained on real data that is highly correlated with the test data. The presented system includes a one-shot deep neural network trained on high-fidelity synthetic data generated entirely through domain randomization, avoiding the need to collect large amounts of human-annotated, and potentially inaccurately annotated, real-world data. Finally, we build the vision system in both a real environment and simulation using the Robot Operating System (ROS).
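The abstract attributes the synthetic training data to domain randomization: nuisance factors such as lighting, textures, viewpoint, and clutter are varied at random across rendered scenes so the trained detector does not overfit to any single rendering condition. The sketch below is a minimal illustration of that idea, not the authors' implementation; it only samples the randomized parameters a renderer would consume per training image, and every parameter name and range in it is an assumption chosen for the example.

```python
# Illustrative sketch only: a minimal domain-randomization parameter sampler
# for synthetic grasp-detection training scenes. All fields and ranges are
# assumptions for illustration; they are not taken from the paper.
import random
from dataclasses import dataclass


@dataclass
class SceneParams:
    light_intensity: float       # arbitrary renderer units
    light_azimuth_deg: float     # direction of the key light
    object_hue: float            # 0..1, fed to an HSV texture shader
    camera_distance_m: float     # camera-to-object distance
    camera_elevation_deg: float  # camera elevation above the table plane
    distractor_count: int        # clutter objects dropped into the scene


def sample_scene(rng: random.Random) -> SceneParams:
    """Draw one randomized scene configuration.

    Each call yields a scene whose nuisance factors (lighting, texture,
    viewpoint, clutter) differ, so a detector trained on the rendered
    images must rely on object geometry rather than appearance cues.
    """
    return SceneParams(
        light_intensity=rng.uniform(0.3, 3.0),
        light_azimuth_deg=rng.uniform(0.0, 360.0),
        object_hue=rng.random(),
        camera_distance_m=rng.uniform(0.4, 1.2),
        camera_elevation_deg=rng.uniform(15.0, 75.0),
        distractor_count=rng.randint(0, 8),
    )


if __name__ == "__main__":
    rng = random.Random(42)
    for i in range(3):
        print(f"scene {i}: {sample_scene(rng)}")
```

Because ground-truth poses and grasp labels are known exactly in the simulator that renders each sampled scene, annotations come for free, which is how this style of pipeline removes the human-labeling cost the abstract highlights.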