An efficient pose classification method for robotic grasping

Cobot | Pub Date: 2022-03-03 | DOI: 10.12688/cobot.17440.1
Wenlong Ji, Yunhan Lin, Huasong Min
{"title":"An efficient pose classification method for robotic grasping","authors":"Wenlong Ji, Yunhan Lin, Huasong Min","doi":"10.12688/cobot.17440.1","DOIUrl":null,"url":null,"abstract":"Background: The unstructured environment, the different geometric shapes of objects, and the uncertainty of sensor noise have brought many challenges to robotic grasping. PointNetGPD (Grasp Pose Detection) which was published in 2019 proposes a point cloud-based grasping pose detection method, which detects reliable grasping poses from the point cloud, and provides an effective process to generate and evaluate grasping poses. However, PointNetGPD uses the point cloud inside the parallel-gripper and the network only uses three channels of information when classifying grasping poses. Methods: In order to improve the accuracy of grasping pose classification, the concept of grasping confidence region was proposed in this paper, which shows the hotspot area of the object can be grasped successfully, and there will be higher success rate when performing grasping in this area. Based on the concept of grasping confidence regions, the grasping dataset in PointNetGPD is improved, which can provide richer information to the classification network. Using our dataset, we trained a scoring network that can score the point cloud collected by the depth camera. We added this scoring network to the classification network of PointNetGPD, and carried out the experiment of grasping poses classification. Results: The experimental results show that the classification accuracy increases by 4% after calculating the score channel on the original dataset; the classification accuracy increases by nearly 1% after using the trained scoring network to score the original dataset. Conclusions: The concept of positive grasp center area is proposed in this paper. Based on this concept, we improve the dataset in PointNetGPD, and use this dataset to train a scoring network to add the score information to the point cloud. The experiments show that our proposed method can effectively improve the accuracy of grasping poses classification network.","PeriodicalId":29807,"journal":{"name":"Cobot","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cobot","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12688/cobot.17440.1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Unstructured environments, the varied geometric shapes of objects, and the uncertainty of sensor noise pose many challenges for robotic grasping. PointNetGPD (Grasp Pose Detection), published in 2019, is a point cloud-based grasp pose detection method that detects reliable grasp poses from the point cloud and provides an effective pipeline for generating and evaluating grasp poses. However, PointNetGPD uses only the point cloud inside the parallel gripper, and its network uses only three channels of information when classifying grasp poses.

Methods: To improve the accuracy of grasp pose classification, this paper proposes the concept of a grasping confidence region, which marks the hotspot area of an object where a grasp is likely to succeed; grasps performed in this area have a higher success rate. Based on this concept, the grasping dataset in PointNetGPD is improved so that it provides richer information to the classification network. Using our dataset, we trained a scoring network that scores the point cloud collected by the depth camera. We added this scoring network to the classification network of PointNetGPD and carried out grasp pose classification experiments.

Results: The experimental results show that classification accuracy increases by 4% after computing the score channel on the original dataset, and by nearly 1% after using the trained scoring network to score the original dataset.

Conclusions: This paper proposes the concept of a positive grasp center area (the grasping confidence region). Based on this concept, we improve the dataset in PointNetGPD and use it to train a scoring network that adds score information to the point cloud. The experiments show that the proposed method effectively improves the accuracy of the grasp pose classification network.
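To make the Methods description concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: a small scoring network predicts a per-point score, and that score is concatenated with the xyz coordinates as a fourth input channel to a PointNet-style grasp pose classifier. It assumes a PyTorch setup; the class names ScoringNetwork and GraspPoseClassifier, all layer sizes, and the two-class output are hypothetical and do not reproduce the actual PointNetGPD architecture or the paper's dataset.

```python
# Minimal sketch of "xyz + score channel" classification (hypothetical, not the authors' code).
import torch
import torch.nn as nn


class ScoringNetwork(nn.Module):
    """Predicts a per-point score in [0, 1] from raw xyz coordinates (hypothetical)."""

    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 1, 1), nn.Sigmoid(),
        )

    def forward(self, xyz):              # xyz: (B, 3, N)
        return self.mlp(xyz)             # scores: (B, 1, N)


class GraspPoseClassifier(nn.Module):
    """A PointNet-like classifier that consumes 4 channels per point: xyz + score."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv1d(4, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, xyz, scores):
        x = torch.cat([xyz, scores], dim=1)   # (B, 4, N): add score as 4th channel
        x = self.feat(x).max(dim=2).values    # global max pooling over points
        return self.head(x)                   # grasp-pose class logits


if __name__ == "__main__":
    cloud = torch.randn(8, 3, 1024)           # 8 point clouds, 1024 points each
    scorer = ScoringNetwork()
    classifier = GraspPoseClassifier()
    logits = classifier(cloud, scorer(cloud))
    print(logits.shape)                       # torch.Size([8, 2])
```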
Source journal: Cobot (collaborative robots)

Journal introduction: Cobot is a rapid multidisciplinary open access publishing platform for research focused on the interdisciplinary field of collaborative robots. The aim of Cobot is to enhance knowledge and share the results of the latest innovative technologies for the technicians, researchers and experts engaged in collaborative robot research. The platform will welcome submissions in all areas of scientific and technical research related to collaborative robots, and all articles will benefit from open peer review. The scope of Cobot includes, but is not limited to:
● Intelligent robots
● Artificial intelligence
● Human-machine collaboration and integration
● Machine vision
● Intelligent sensing
● Smart materials
● Design, development and testing of collaborative robots
● Software for cobots
● Industrial applications of cobots
● Service applications of cobots
● Medical and health applications of cobots
● Educational applications of cobots
As well as research articles and case studies, Cobot accepts a variety of article types including method articles, study protocols, software tools, systematic reviews, data notes, brief reports, and opinion articles.