Autonomous robotic bin picking platform generated from human demonstration and YOLOv5

IF 2.4 · CAS Tier 3 (Engineering) · JCR Q3 (ENGINEERING, MANUFACTURING)
Jinho Park, C. Han, M. Jun, Huitaek Yun
{"title":"由人类演示和YOLOv5生成的自主机器人拣货平台","authors":"Jinho Park, C. Han, M. Jun, Huitaek Yun","doi":"10.1115/1.4063107","DOIUrl":null,"url":null,"abstract":"\n Vision-based robots have been utilized for pick-and-place operations by their ability to find object poses. As they progress into handling a variety of objects with cluttered state, more flexible and lightweight operations have been presented. In this paper, an autonomous robotic bin-picking platform which combines human demonstration with a collaborative robot for the flexibility of the objects and YOLOv5 neural network model for the faster object localization without prior CAD models or dataset in the training. After simple human demonstration of which target object to pick and place, the raw color and depth images were refined, and the one on top of the bin was utilized to create synthetic images and annotations for the YOLOv5 model. To pick up the target object, the point cloud was lifted using the depth data corresponding to the result of the trained YOLOv5 model, and the object pose was estimated through matching them by Iterative Closest Points (ICP) algorithm. After picking up the target object, the robot placed it where the user defined in the previous human demonstration stage. From the result of experiments with four types of objects and four human demonstrations, it took a total of 0.5 seconds to recognize the target object and estimate the object pose. The success rate of object detection was 95.6%, and the pick-and-place motion of all the found objects were successful.","PeriodicalId":16299,"journal":{"name":"Journal of Manufacturing Science and Engineering-transactions of The Asme","volume":" ","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Autonomous robotic bin picking platform generated from human demonstration and YOLOv5\",\"authors\":\"Jinho Park, C. Han, M. Jun, Huitaek Yun\",\"doi\":\"10.1115/1.4063107\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Vision-based robots have been utilized for pick-and-place operations by their ability to find object poses. As they progress into handling a variety of objects with cluttered state, more flexible and lightweight operations have been presented. In this paper, an autonomous robotic bin-picking platform which combines human demonstration with a collaborative robot for the flexibility of the objects and YOLOv5 neural network model for the faster object localization without prior CAD models or dataset in the training. After simple human demonstration of which target object to pick and place, the raw color and depth images were refined, and the one on top of the bin was utilized to create synthetic images and annotations for the YOLOv5 model. To pick up the target object, the point cloud was lifted using the depth data corresponding to the result of the trained YOLOv5 model, and the object pose was estimated through matching them by Iterative Closest Points (ICP) algorithm. After picking up the target object, the robot placed it where the user defined in the previous human demonstration stage. From the result of experiments with four types of objects and four human demonstrations, it took a total of 0.5 seconds to recognize the target object and estimate the object pose. 
The success rate of object detection was 95.6%, and the pick-and-place motion of all the found objects were successful.\",\"PeriodicalId\":16299,\"journal\":{\"name\":\"Journal of Manufacturing Science and Engineering-transactions of The Asme\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2023-08-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Manufacturing Science and Engineering-transactions of The Asme\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1115/1.4063107\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, MANUFACTURING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Manufacturing Science and Engineering-transactions of The Asme","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1115/1.4063107","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0

Abstract

Vision-based robots have been utilized for pick-and-place operations owing to their ability to find object poses. As they progress toward handling a variety of objects in cluttered states, more flexible and lightweight operations have been presented. In this paper, an autonomous robotic bin-picking platform is presented that combines human demonstration with a collaborative robot, for flexibility across objects, and a YOLOv5 neural network model, for fast object localization without prior CAD models or datasets in training. After a simple human demonstration of which target object to pick and where to place it, the raw color and depth images were refined, and the image of the object on top of the bin was used to create synthetic images and annotations for the YOLOv5 model. To pick up the target object, a point cloud was lifted using the depth data corresponding to the detection result of the trained YOLOv5 model, and the object pose was estimated by matching the point clouds with the Iterative Closest Point (ICP) algorithm. After picking up the target object, the robot placed it where the user had defined in the earlier human demonstration stage. In experiments with four object types and four human demonstrations, recognizing the target object and estimating its pose took a total of 0.5 seconds. The object detection success rate was 95.6%, and the pick-and-place motions of all detected objects were successful.
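For readers who want to prototype the detection-to-pose step the abstract describes, the sketch below back-projects the depth pixels inside a YOLOv5 bounding box into a point cloud and registers a template cloud onto it with point-to-point ICP. This is a minimal illustration, not the authors' implementation: it assumes Open3D for registration, a pinhole depth camera with made-up intrinsics (FX, FY, CX, CY), and depth images already in meters.

```python
import numpy as np
import open3d as o3d

# Hypothetical pinhole intrinsics for an RGB-D camera (not from the paper).
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def bbox_to_cloud(depth_m: np.ndarray, bbox: tuple) -> o3d.geometry.PointCloud:
    """Back-project valid depth pixels inside an (x1, y1, x2, y2) box to 3D."""
    x1, y1, x2, y2 = [int(v) for v in bbox]
    crop = depth_m[y1:y2, x1:x2]
    vs, us = np.nonzero(crop)          # keep only pixels with nonzero depth
    z = crop[vs, us]
    u, v = us + x1, vs + y1            # back to full-image pixel coordinates
    pts = np.stack([(u - CX) * z / FX, (v - CY) * z / FY, z], axis=1)
    return o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))

def estimate_pose(template: o3d.geometry.PointCloud,
                  scene: o3d.geometry.PointCloud,
                  voxel: float = 0.003) -> np.ndarray:
    """Point-to-point ICP aligning the template cloud onto the detected cloud."""
    src = template.voxel_down_sample(voxel)   # downsample both clouds for speed
    dst = scene.voxel_down_sample(voxel)
    reg = o3d.pipelines.registration.registration_icp(
        src, dst,
        max_correspondence_distance=0.01,     # 1 cm matching radius
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration
                             .TransformationEstimationPointToPoint(),
    )
    return reg.transformation                 # 4x4 object pose in camera frame
```

In the platform described above, the template cloud would come from the human-demonstration stage and the bounding box from the trained YOLOv5 model (a custom checkpoint can be loaded with `torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')`); the returned 4x4 transform is the object pose used to plan the pick.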
Source journal
CiteScore: 6.80
Self-citation rate: 20.00%
Articles published: 126
Review time: 12 months
Journal scope: Areas of interest include, but are not limited to: Additive manufacturing; Advanced materials and processing; Assembly; Biomedical manufacturing; Bulk deformation processes (e.g., extrusion, forging, wire drawing, etc.); CAD/CAM/CAE; Computer-integrated manufacturing; Control and automation; Cyber-physical systems in manufacturing; Data science-enhanced manufacturing; Design for manufacturing; Electrical and electrochemical machining; Grinding and abrasive processes; Injection molding and other polymer fabrication processes; Inspection and quality control; Laser processes; Machine tool dynamics; Machining processes; Materials handling; Metrology; Micro- and nano-machining and processing; Modeling and simulation; Nontraditional manufacturing processes; Plant engineering and maintenance; Powder processing; Precision and ultra-precision machining; Process engineering; Process planning; Production systems optimization; Rapid prototyping and solid freeform fabrication; Robotics and flexible tooling; Sensing, monitoring, and diagnostics; Sheet and tube metal forming; Sustainable manufacturing; Tribology in manufacturing; Welding and joining