3D Grasp Pose Generation from 2D Anchors and Local Surface

Hao Xu, Yangchang Sun, Qi Sun, Minghao Yang, Jinlong Chen, Baohua Qiang, Jinghong Wang
DOI: 10.1145/3574131.3574453
Published 2022-12-27 in Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry.

Abstract

This work proposes a three-dimensional (3D) grasp pose generation method for robot manipulators based on predicted two-dimensional (2D) anchors and the depth information of the local surface. Compared to traditional image-based grasp area detection methods, in which the grasp pose is represented by only two contact points, the proposed method generates a more accurate 3D grasp pose. Furthermore, unlike 6-DoF object pose regression methods, which consider the point cloud of the whole object, the proposed method is very lightweight, since 3D computation is performed only on the depth information of the local grasp surface. The method consists of three steps: (1) detecting the 2D grasp anchor and extracting the local grasp surface from the image; (2) obtaining the average vector of the object's local grasp surface from its local point cloud; (3) generating the 3D grasp pose from the 2D grasp anchor based on the average vector of the local grasp surface. Experiments are carried out on the Cornell and Jacquard grasp datasets, where the proposed method improves grasp accuracy compared to state-of-the-art 2D anchor methods. The method is also validated on practical grasp tasks deployed on a UR5 arm with a Robotiq F85 gripper, where it outperforms state-of-the-art 2D anchor methods in grasp success rate across dozens of practical grasp tasks.
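Step (2) of the pipeline can be sketched in code. The snippet below is an illustrative assumption, not the authors' implementation: it back-projects a local depth patch into a point cloud with a pinhole camera model (the intrinsics `fx, fy, cx, cy` are hypothetical), then estimates the average vector of the local grasp surface as the PCA normal, i.e. the eigenvector of the point covariance with the smallest eigenvalue, oriented toward the camera.

```python
import numpy as np

def backproject(depth_patch, u0, v0, fx, fy, cx, cy):
    """Back-project a local depth patch (meters) to a 3D point cloud
    using the pinhole camera model. (u0, v0) is the patch's top-left
    pixel coordinate in the full image; fx, fy, cx, cy are intrinsics."""
    h, w = depth_patch.shape
    us, vs = np.meshgrid(np.arange(w) + u0, np.arange(h) + v0)
    z = depth_patch
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def surface_normal(points):
    """Average vector of the local grasp surface: the eigenvector of the
    point covariance with the smallest eigenvalue (PCA plane normal),
    flipped if needed so it points toward the camera (negative z)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
    n = eigvecs[:, 0]                       # smallest-eigenvalue direction
    return n if n[2] < 0 else -n
```

With this normal and the 2D anchor's in-plane rotation angle, a full 3D grasp orientation can then be assembled, which is the essence of step (3).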