3D Grasp Pose Generation from 2D Anchors and Local Surface

Hao Xu, Yangchang Sun, Qi Sun, Minghao Yang, Jinlong Chen, Baohua Qiang, Jinghong Wang

Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry
Published: 2022-12-27
DOI: 10.1145/3574131.3574453 (https://doi.org/10.1145/3574131.3574453)
Citations: 0
Abstract
This work proposes a three-dimensional (3D) grasp pose generation method for robot manipulators that combines predicted two-dimensional (2D) anchors with the depth information of the local grasp surface. Compared to traditional image-based grasp-area detection methods, in which the grasp pose is represented by only two contact points, the proposed method generates a more accurate 3D grasp pose. Furthermore, unlike 6-DoF object pose regression methods that consider the point cloud of the whole object, the proposed method is very lightweight, since 3D computation is performed only on the depth information of the local grasp surface. The method consists of three steps: (1) detecting the 2D grasp anchor and extracting the local grasp surface from the image; (2) obtaining the average vector of the object's local grasp surface from its local point cloud; (3) generating the 3D grasp pose from the 2D grasp anchor based on the average vector of the local grasp surface. Experiments are carried out on the Cornell and Jacquard grasp datasets, where the proposed method improves grasp accuracy over state-of-the-art 2D anchor methods. The method is further validated on practical grasp tasks deployed on a UR5 arm with a Robotiq F85 gripper, where it outperforms state-of-the-art 2D anchor methods in grasp success rate across dozens of practical grasp tasks.
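The three-step pipeline in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes a pinhole camera model, estimates the "average vector" of the local surface with a PCA normal (the eigenvector of the patch covariance with the smallest eigenvalue), and lifts the 2D anchor's in-plane rotation angle to a full rotation matrix. All function names, parameters, and the PCA-based normal estimate are illustrative assumptions.

```python
import numpy as np

def backproject_patch(depth_patch, us, vs, fx, fy, cx, cy):
    """Step 1 (partial): back-project the extracted local depth patch
    to a 3D point cloud, assuming a pinhole camera with intrinsics
    (fx, fy, cx, cy). us/vs are the pixel coordinates of the patch."""
    z = depth_patch
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def average_surface_vector(points):
    """Step 2: estimate an average vector for the local grasp surface.
    Here approximated by PCA: the eigenvector of the covariance matrix
    with the smallest eigenvalue is the surface normal (an assumption;
    the paper averages vectors over the local surface)."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    normal = eigvecs[:, 0]                   # smallest-eigenvalue direction
    if normal[2] > 0:                        # orient toward the camera (-z)
        normal = -normal
    return centroid, normal

def grasp_pose_from_anchor(centroid, normal, anchor_angle):
    """Step 3: lift the 2D anchor (its in-plane rotation angle) to a
    3D grasp pose: position at the patch centroid, approach axis along
    the surface vector, closing direction from the anchor angle."""
    approach = normal / np.linalg.norm(normal)
    # gripper closing direction implied by the 2D anchor angle
    in_plane = np.array([np.cos(anchor_angle), np.sin(anchor_angle), 0.0])
    # project onto the plane orthogonal to the approach axis
    closing = in_plane - in_plane.dot(approach) * approach
    closing /= np.linalg.norm(closing)
    third = np.cross(approach, closing)
    R = np.stack([closing, third, approach], axis=1)  # columns = pose axes
    return centroid, R
```

For a near-planar patch the PCA normal coincides with the averaged surface vector, which is what makes the computation lightweight: only the small local patch is processed, never the full object point cloud.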