Object-Centric Grasping Transferability: Linking Meshes to Postures

Diego Hidalgo-Carvajal, Carlos Magno Catharino Olsson Valle, Abdeldjallil Naceri, S. Haddadin
DOI: 10.1109/Humanoids53995.2022.10000192
Published in: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), 2022-11-28
Citations: 2

Abstract

Attaining human hand manipulation capabilities is a sought-after goal of robotic manipulation. Several works have focused on understanding human manipulation and applying its insights in robotic applications. However, few have considered objects as central pieces for increasing the generalization properties of existing methods. In this study, we explore the transferability of context-based grasping information between objects using mesh-based object representations. To do so, we empirically labeled, in a mesh point-wise manner, 10 grasping postures onto a set of 12 purposely selected objects. Subsequently, we trained our convolutional neural network (CNN) based architecture on the mesh representation of a single object, associating grasping postures with its local regions. We then tested the network across multiple objects of distinct similarity values. Results show that our network can successfully estimate non-feasible grasping regions as well as feasible grasping postures. They further suggest an abstract relation between the predicted context-based grasping postures and the geometrical properties of both the training and test objects. Our approach aims to expand grasp learning research by linking locally segmented meshes to postures. Such a concept can be applied to grasp new objects using anthropomorphic robot hands.
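The core idea above is that grasp postures are attached point-wise to regions of an object's mesh, and that these region-to-posture associations transfer to geometrically similar objects. The following is a minimal pure-Python sketch of that idea, using nearest-neighbor label transfer in 3-D as a simple stand-in for the paper's CNN; the posture names, vertex coordinates, and labels are illustrative assumptions, not from the paper's dataset or architecture.

```python
import math

# Stand-in for the paper's CNN: per-vertex grasp-posture labels are
# transferred from a labeled "training" mesh to a new mesh by nearest
# neighbor in 3-D space. All names and coordinates here are hypothetical.

def nearest_label(vertex, labeled_vertices):
    """Return the posture label of the labeled vertex closest to `vertex`."""
    closest = min(labeled_vertices, key=lambda vl: math.dist(vertex, vl[0]))
    return closest[1]

def transfer_labels(train_mesh, test_mesh):
    """Predict a posture label for every vertex of the unlabeled test mesh."""
    return [nearest_label(v, train_mesh) for v in test_mesh]

# Labeled training mesh: (xyz, posture) pairs, e.g. a mug-like object whose
# handle affords a precision grasp and whose body affords a power grasp.
# "non-feasible" marks a region where no grasp posture applies (e.g. the rim).
train_mesh = [
    ((0.00, 0.00, 0.00), "power"),
    ((0.00, 0.00, 0.10), "power"),
    ((0.10, 0.05, 0.05), "precision"),
    ((0.12, 0.05, 0.05), "precision"),
    ((0.00, 0.00, 0.20), "non-feasible"),
]

# Vertices of a geometrically similar, unlabeled test object.
test_mesh = [(0.01, 0.00, 0.05), (0.11, 0.05, 0.06), (0.00, 0.01, 0.19)]

print(transfer_labels(train_mesh, test_mesh))
# → ['power', 'precision', 'non-feasible']
```

A learned model like the paper's CNN replaces this nearest-neighbor rule with features computed from local mesh geometry, which is what lets predictions generalize across objects of varying similarity rather than relying on raw coordinate proximity.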