{"title":"IPGD: A Dataset for Robotic Inside-Propped Grasp Detection","authors":"Xuefeng Liu, Guangjian Zhang","doi":"10.1109/ACAIT56212.2022.10137845","DOIUrl":null,"url":null,"abstract":"Grasping skills are the basic skills required by robots in many practical applications. Recent research on robotic grasping detection generally focuses on grasping poses similar to human grasping. However, this grasping pose is not suitable for all grasping scenarios in practical applications. Therefore, this paper uses a new inside-propped grasping pose to label a large number of images with inside-propped grasping potential. In this way, an inside-propped grasp dataset is completed. Based on this dataset, this paper constructs a generative deep neural network for the inside-propped grasping prediction. The experimental results show that the success rate of the inside-propped grasping prediction network is 65.59%, and the average prediction time is 82ms, which has achieved good results in accuracy and real-time performance.","PeriodicalId":398228,"journal":{"name":"2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACAIT56212.2022.10137845","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Grasping is a fundamental skill that robots need in many practical applications. Recent research on robotic grasp detection has generally focused on grasp poses that imitate human grasping. However, such poses are not suitable for every grasping scenario encountered in practice. This paper therefore introduces a new inside-propped grasp pose and uses it to annotate a large number of images of objects with inside-propped grasping potential, producing an inside-propped grasp dataset. Based on this dataset, the paper builds a generative deep neural network for inside-propped grasp prediction. Experimental results show that the network achieves a prediction success rate of 65.59% with an average prediction time of 82 ms, demonstrating good accuracy and real-time performance.
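The abstract does not describe the architecture of the generative grasp-prediction network. As a rough illustration of what a generative (pixel-wise) grasp detector typically looks like, the sketch below regresses per-pixel grasp quality, orientation, and width maps from an input image, in the style of GG-CNN-like models. It assumes PyTorch; the layer sizes, input resolution, and output heads are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class GenerativeGraspNet(nn.Module):
    """Minimal encoder-decoder that maps an input image to per-pixel grasp
    maps: quality, orientation (encoded as cos/sin of 2*theta), and gripper
    opening width. Layer sizes are illustrative, not taken from the paper."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
        )
        # One 1x1 convolution head per output map.
        self.quality_head = nn.Conv2d(16, 1, 1)
        self.cos_head = nn.Conv2d(16, 1, 1)
        self.sin_head = nn.Conv2d(16, 1, 1)
        self.width_head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        features = self.decoder(self.encoder(x))
        return (
            self.quality_head(features),
            self.cos_head(features),
            self.sin_head(features),
            self.width_head(features),
        )


if __name__ == "__main__":
    net = GenerativeGraspNet()
    image = torch.randn(1, 3, 224, 224)  # dummy RGB input
    quality, cos2t, sin2t, width = net(image)
    # The pixel with the highest quality score gives the grasp centre;
    # 0.5 * atan2(sin2t, cos2t) at that pixel recovers the grasp orientation.
    print(quality.shape)  # torch.Size([1, 1, 224, 224])
```

For an inside-propped grasp, the same maps can be reinterpreted: the predicted width becomes the outward opening of the gripper fingers pressing against the inner surface of the object, rather than the closing width of a conventional pinch grasp.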