Collision-Free Grasp Detection From Color and Depth Images

Dinh-Cuong Hoang;Anh-Nhat Nguyen;Chi-Minh Nguyen;An-Binh Phi;Quang-Tri Duong;Khanh-Duong Tran;Viet-Anh Trinh;Van-Duc Tran;Hai-Nam Pham;Phuc-Quan Ngo;Duy-Quang Vu;Thu-Uyen Nguyen;Van-Duc Vu;Duc-Thanh Tran;Van-Thiep Nguyen
{"title":"Collision-Free Grasp Detection From Color and Depth Images","authors":"Dinh-Cuong Hoang;Anh-Nhat Nguyen;Chi-Minh Nguyen;An-Binh Phi;Quang-Tri Duong;Khanh-Duong Tran;Viet-Anh Trinh;Van-Duc Tran;Hai-Nam Pham;Phuc-Quan Ngo;Duy-Quang Vu;Thu-Uyen Nguyen;Van-Duc Vu;Duc-Thanh Tran;Van-Thiep Nguyen","doi":"10.1109/TAI.2024.3420848","DOIUrl":null,"url":null,"abstract":"Efficient and reliable grasp pose generation plays a crucial role in robotic manipulation tasks. The advancement of deep learning techniques applied to point cloud data has led to rapid progress in grasp detection. However, point cloud data has limitations: no appearance information and susceptibility to sensor noise. In contrast, color Red, Green, Blue (RGB) images offer high-resolution and intricate textural details, making them a valuable complement to the 3-D geometry offered by point clouds or depth (D) images. Nevertheless, the effective integration of appearance information to enhance point cloud-based grasp detection remains an open question. In this study, we extend the concepts of VoteGrasp \n<xref>[1]</xref>\n and introduce an innovative deep learning approach referred to as VoteGrasp Red, Green, Blue, Depth (RGBD). To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. This methodology revolves around fuzing votes extracted from images and point clouds. To further enhance the collaborative effect of merging appearance and geometry features, we introduce a context learning module. We exploit contextual information by encoding the dependency of objects in the scene into features to boost the performance of grasp generation. The contextual information enables our model to increase the likelihood that the generated grasps are collision-free. The efficacy of our model is verified through comprehensive evaluations on the demanding GraspNet-1Billion dataset, leading to a significant improvement of 9.3 in average precision (AP) over the existing state-of-the-art results. Additionally, we provide extensive analyses through ablation studies to elucidate the contributions of each design decision.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 11","pages":"5689-5698"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10579461/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Efficient and reliable grasp pose generation plays a crucial role in robotic manipulation tasks. The advancement of deep learning techniques applied to point cloud data has led to rapid progress in grasp detection. However, point cloud data has limitations: it lacks appearance information and is susceptible to sensor noise. In contrast, color Red, Green, Blue (RGB) images offer high resolution and intricate textural detail, making them a valuable complement to the 3-D geometry offered by point clouds or depth (D) images. Nevertheless, the effective integration of appearance information to enhance point cloud-based grasp detection remains an open question. In this study, we extend the concepts of VoteGrasp [1] and introduce an innovative deep learning approach referred to as VoteGrasp Red, Green, Blue, Depth (RGBD). To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. This methodology revolves around fusing votes extracted from images and point clouds. To further enhance the collaborative effect of merging appearance and geometry features, we introduce a context learning module. We exploit contextual information by encoding the dependencies among objects in the scene into features to boost the performance of grasp generation. The contextual information enables our model to increase the likelihood that the generated grasps are collision-free. The efficacy of our model is verified through comprehensive evaluations on the demanding GraspNet-1Billion dataset, leading to a significant improvement of 9.3 in average precision (AP) over the existing state-of-the-art results. Additionally, we provide extensive analyses through ablation studies to elucidate the contributions of each design decision.
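
The central mechanism described above is vote casting over fused image and point-cloud features. The following is a minimal sketch of that idea, not the authors' implementation: per-point geometry features are concatenated with appearance features sampled at the pixel each seed point projects to, and a shared MLP regresses a 3-D offset (the "vote") toward a grasp center. All module names, feature dimensions, and the concatenation-based fusion here are assumptions made only for illustration.

# Hypothetical sketch of vote casting from fused RGB + point-cloud features
# (illustrative only; dimensions and fusion-by-concatenation are assumptions).
import torch
import torch.nn as nn

class VoteFusionSketch(nn.Module):
    def __init__(self, geo_dim=128, img_dim=64, vote_dim=128):
        super().__init__()
        # Shared MLP mapping fused per-point features to a 3-D vote offset
        # plus a refined vote feature used by later grasp-proposal stages.
        self.vote_mlp = nn.Sequential(
            nn.Linear(geo_dim + img_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 + vote_dim),
        )

    def forward(self, seed_xyz, geo_feat, img_feat):
        # seed_xyz: (B, N, 3)        seed point coordinates from the point cloud
        # geo_feat: (B, N, geo_dim)  geometry features from a point-cloud backbone
        # img_feat: (B, N, img_dim)  appearance features sampled at projected pixels
        fused = torch.cat([geo_feat, img_feat], dim=-1)
        out = self.vote_mlp(fused)
        offset, vote_feat = out[..., :3], out[..., 3:]
        vote_xyz = seed_xyz + offset  # each seed casts a vote toward a grasp center
        return vote_xyz, vote_feat

if __name__ == "__main__":
    B, N = 2, 1024
    model = VoteFusionSketch()
    votes, feats = model(torch.randn(B, N, 3), torch.randn(B, N, 128), torch.randn(B, N, 64))
    print(votes.shape, feats.shape)  # (2, 1024, 3) and (2, 1024, 128)

In a full pipeline, the resulting votes would be clustered and decoded into grasp candidates, with the context learning module and collision reasoning described in the abstract applied on top; those stages are not sketched here.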