Enhancing Antipodal Grasping Modalities in Complex Environments Through Learning and Analytical Fusion

Impact Factor 6.4 · CAS Tier 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
Tat Hieu Bui;Yeong Gwang Son;Juyong Hong;Yong Hyeon Kim;Hyouk Ryeol Choi
IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 9767-9781
DOI: 10.1109/TASE.2024.3512005 · Published: 2024-12-20 · Journal Article
https://ieeexplore.ieee.org/document/10810737/
Citations: 0

Abstract

Robotic pick-and-place systems are widely applied in fields such as assembly, packaging, bin-picking, and sorting. In this paper, we present a method based on deep learning and analytical reasoning for generating multi-modal antipodal grasps in highly cluttered scenes. Our method exploits three types of grasp poses to handle environmental complexity and achieves computation times efficient enough for real applications. A new synthetic training dataset of approximately 35,000 RGB-D images is generated in Isaac Sim, and an automatic labeling algorithm is developed. We use convolutional neural networks (CNNs) to predict antipodal grasping parameters on objects, together with a filtering algorithm that avoids collisions and calculates grasp depth simultaneously. Our approach processes the entire task in approximately 0.2 seconds, achieving a success rate of over 96% and more than 98% collision-free grasps in cluttered scenes. The method was verified in experiments with an RB10 robot arm, two-finger grippers, an L515 depth camera, and several objects in different scenes. The article presents a simple, effective, and highly applicable approach for real environments. A video of the real experiments is available at https://www.youtube.com/watch?v=GvJZxUyQr3w.

Note to Practitioners — The robotic pick-and-place task is fundamental to the progress of automation. This article presents a novel method for efficiently picking objects from complex environments characterized by obstacles, overlaps, and object diversity. These scenarios pose challenges in approaching objects and avoiding collisions. Our method combines learning and analytical approaches, demonstrating high performance in accuracy, generalization across various scenes and grippers, and rapid computation, with the entire process taking approximately 0.2 seconds. These advances are validated through a series of real robotic grasping experiments and comparisons with state-of-the-art methods. We believe that our method represents a significant contribution to the field of automation science and holds promise for real-world applications.
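The abstract's central object is the antipodal grasp: two contact points whose connecting line lies inside both contacts' friction cones, so a parallel-jaw gripper can hold the object by friction alone. The sketch below is a minimal illustration of this standard geometric condition only; it is not the authors' CNN-based pipeline, and the function name, inputs, and friction coefficient are assumptions for illustration.

```python
import numpy as np

def is_antipodal(p1, n1, p2, n2, mu=0.4):
    """Check the classic antipodal condition for a two-contact grasp.

    p1, p2: 3D contact points on the object surface.
    n1, n2: inward-pointing surface normals at those contacts.
    mu:     friction coefficient; the friction cone half-angle is arctan(mu).

    The grasp is antipodal if the grasp axis (line from p1 to p2) lies
    within the friction cone at both contacts.
    """
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    half_angle = np.arctan(mu)
    # Angle between the grasp axis and each (normalized) contact normal.
    a1 = np.arccos(np.clip(np.dot(axis, n1 / np.linalg.norm(n1)), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-axis, n2 / np.linalg.norm(n2)), -1.0, 1.0))
    return bool(a1 <= half_angle and a2 <= half_angle)
```

For example, two contacts on opposite faces of a box with exactly opposing normals pass the test, while a contact whose normal is tilted 45 degrees off the grasp axis fails it for mu = 0.4 (half-angle about 21.8 degrees). In a full pipeline such as the one described here, a check of this kind would be one ingredient of the analytical filtering stage that follows the learned grasp prediction.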
Source Journal: IEEE Transactions on Automation Science and Engineering (Engineering & Technology — Automation & Control Systems)
CiteScore: 12.50
Self-citation rate: 14.30%
Articles per year: 404
Review time: 3.0 months
About the journal: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.