Learning accurate and efficient three-finger grasp generation in clutters with an auto-annotated large-scale dataset

Zhenning Zhou, Han Sun, Xi Vincent Wang, Zhinan Zhang, Qixin Cao

Robotics and Computer-Integrated Manufacturing, Vol. 91, Article 102822. Published 2024-07-14. DOI: 10.1016/j.rcim.2024.102822

Abstract:
With the development of intelligent manufacturing and robotic technologies, the ability to grasp unknown objects in unstructured environments is becoming increasingly important for robots across a wide range of applications. However, current robotic three-finger grasping studies focus only on grasp generation for single objects or scattered scenes, and the high time cost of labeling grasp ground truth leaves them unable to predict grasp poses for cluttered objects or to generate large-scale datasets. To address these limitations, we first introduce a novel three-finger grasp representation with fewer prediction dimensions, which balances training difficulty against representation accuracy to achieve efficient grasping performance. Based on this representation, we develop an auto-annotation pipeline and contribute a large-scale three-finger grasp dataset (TF-Grasp Dataset) containing 222,720 RGB-D images with over 2 billion grasp annotations in cluttered scenes. In addition, we propose a three-finger grasp pose detection network (TF-GPD) that detects globally while fine-tuning locally to predict high-quality, collision-free grasps from a single-view point cloud. In sum, our work addresses high-quality, collision-free three-finger grasp generation in cluttered scenes via the proposed pipeline. Extensive comparative experiments show that our methodology outperforms previous methods and improves grasp quality and efficiency in clutter. The strong results of real-world robot grasping experiments not only demonstrate the reliability of our grasp model but also pave the way for practical applications of three-finger grasping. Our dataset and source code will be released.
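The abstract highlights a grasp representation with fewer prediction dimensions than a full gripper pose, but does not specify the exact parameterization. The sketch below is a hypothetical illustration of what such a reduced representation might look like: a grasp center, a unit approach direction, an in-plane rotation angle, and an opening width. The class name and fields are assumptions for illustration, not the paper's actual definition.

```python
import math
from dataclasses import dataclass


@dataclass
class ThreeFingerGrasp:
    """Hypothetical reduced three-finger grasp parameterization.

    The paper's exact representation is not given in the abstract;
    this sketch only illustrates the general idea of predicting a
    low-dimensional vector instead of a full 6-DoF pose plus joints.
    """

    x: float                       # grasp center (camera frame), meters
    y: float
    z: float
    approach: tuple                # unit approach direction (ax, ay, az)
    angle: float                   # in-plane rotation about the approach axis, radians
    width: float                   # gripper opening width, meters

    def __post_init__(self):
        # Normalize the approach direction so downstream code can rely on it.
        norm = math.sqrt(sum(c * c for c in self.approach))
        if norm == 0:
            raise ValueError("approach direction must be non-zero")
        self.approach = tuple(c / norm for c in self.approach)

    def as_vector(self):
        # Flatten to an 8-D prediction target: 3 (center) + 3 (approach)
        # + 1 (angle) + 1 (width).
        return [self.x, self.y, self.z, *self.approach, self.angle, self.width]
```

A network predicting this vector per scene point would regress 8 values per candidate, which is the kind of dimensionality reduction the abstract alludes to.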
Journal introduction:
The journal, Robotics and Computer-Integrated Manufacturing, focuses on sharing research applications that contribute to the development of new or enhanced robotics, manufacturing technologies, and innovative manufacturing strategies that are relevant to industry. Papers that combine theory and experimental validation are preferred, while review papers on current robotics and manufacturing issues are also considered. However, papers on traditional machining processes, modeling and simulation, supply chain management, and resource optimization are generally not within the scope of the journal, as there are more appropriate journals for these topics. Similarly, papers that are overly theoretical or mathematical will be directed to other suitable journals. The journal welcomes original papers in areas such as industrial robotics, human-robot collaboration in manufacturing, cloud-based manufacturing, cyber-physical production systems, big data analytics in manufacturing, smart mechatronics, machine learning, adaptive and sustainable manufacturing, and other fields involving unique manufacturing technologies.