Predicting the robot's grip capacity on different objects using multi-object grasping

IF 2.1 Q3 ROBOTICS
Joseph Teguh Santoso, Mars Caroline Wibowo, Budi Raharjo
{"title":"Predicting the robot's grip capacity on different objects using multi-object grasping","authors":"Joseph Teguh Santoso, Mars Caroline Wibowo, Budi Raharjo","doi":"10.1007/s41315-024-00342-1","DOIUrl":null,"url":null,"abstract":"<p>This study explores the novel concept of Multi-Object Grasping (MOG) and develops an architecture based on autoencoders and transformers for accurate object prediction in MOG scenarios. The approach employs different deep learning methods and diverse training approaches using the ping pong ball dataset. The parameters obtained from this training enhance the model's performance on the actual system dataset, serving as the final test and validation of the model's reliability in real-world situations. Comparing the model's performance on both datasets facilitates validation and refinement, affirming its effectiveness in practical robotic applications. The study highlights that training various dataset features significantly improves prediction accuracy compared to the Naïve model using dense neural networks. Using five-time steps notably enhances prediction accuracy, especially with the GRU model in time-series data architecture, achieving a peak accuracy of 96%. While MOG has been extensively studied, this study introduces a novel architecture distinct from traditional visual methods. A framework is established that utilizes autoencoder and transformer technologies for managing tactile sensors, hand pose joint angles and force measurements. This approach demonstrates the potential for accurately predicting multiple objects in MOG scenarios.</p>","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":2.1000,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Robotics and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s41315-024-00342-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ROBOTICS","Score":null,"Total":0}
引用次数: 0

Abstract

This study explores the novel concept of Multi-Object Grasping (MOG) and develops an architecture based on autoencoders and transformers for accurate object prediction in MOG scenarios. The approach applies several deep learning methods and training strategies to a ping pong ball dataset; the parameters learned there then improve the model's performance on the actual system dataset, which serves as the final test of the model's reliability in real-world situations. Comparing performance on the two datasets supports validation and refinement, confirming the model's effectiveness in practical robotic applications. The study shows that training on a variety of dataset features significantly improves prediction accuracy over a naïve dense-neural-network baseline. Using five time steps notably enhances prediction accuracy, especially with the GRU model in the time-series architecture, which achieves a peak accuracy of 96%. While MOG has been studied extensively, this work introduces an architecture distinct from traditional vision-based methods: a framework that uses autoencoder and transformer technologies to handle tactile sensing, hand-pose joint angles, and force measurements, demonstrating the potential to accurately predict multiple objects in MOG scenarios.
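The abstract's headline result is a GRU reading five-time-step windows of the grasp signals. As a rough illustration of that setup (not the authors' released code), the sketch below builds a small GRU classifier in PyTorch that maps a window of per-step feature vectors to a predicted object count; the feature width, hidden size, and six-way class range (0 to 5 balls) are assumptions for the ping pong ball scenario.

```python
import torch
import torch.nn as nn

class GRUGraspCounter(nn.Module):
    def __init__(self, n_features=64, hidden=128, n_classes=6):
        super().__init__()
        # single-layer GRU reads the five-step window step by step
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        # final hidden state -> object-count logits
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, 5, n_features) -- one feature vector per time step
        _, h = self.gru(x)             # h: (1, batch, hidden)
        return self.head(h.squeeze(0)) # logits: (batch, n_classes)

model = GRUGraspCounter()
window = torch.randn(8, 5, 64)         # 8 synthetic five-step windows
logits = model(window)                 # (8, 6) class logits
print(logits.argmax(dim=1))            # predicted number of balls in hand
```

Framing the output as a classification over object counts, rather than a regression, is one plausible reading of how the reported 96% accuracy would be computed.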

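The framework also combines an autoencoder with a transformer to manage the tactile, hand-pose joint-angle, and force inputs. Below is a minimal sketch of one way to wire that up, assuming a per-time-step autoencoder whose latent codes feed a transformer encoder; all dimensions and layer counts are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class AETransformerGrasp(nn.Module):
    def __init__(self, n_features=64, latent=32, n_classes=6):
        super().__init__()
        # autoencoder compresses each time step's concatenated
        # tactile / joint-angle / force vector into a latent code
        self.enc = nn.Sequential(nn.Linear(n_features, latent), nn.ReLU())
        self.dec = nn.Linear(latent, n_features)  # reconstruction branch
        # transformer encoder attends across the time dimension
        layer = nn.TransformerEncoderLayer(d_model=latent, nhead=4,
                                           dim_feedforward=64, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(latent, n_classes)

    def forward(self, x):                   # x: (batch, T, n_features)
        z = self.enc(x)                     # (batch, T, latent)
        recon = self.dec(z)                 # trained with an MSE loss
        pooled = self.transformer(z).mean(dim=1)
        return self.head(pooled), recon     # count logits + reconstruction

model = AETransformerGrasp()
seq = torch.randn(4, 5, 64)
logits, recon = model(seq)
print(logits.shape, recon.shape)  # torch.Size([4, 6]) torch.Size([4, 5, 64])
```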

Source journal
CiteScore: 3.80 · Self-citation rate: 5.90% · Publications per year: 50
About the journal: The International Journal of Intelligent Robotics and Applications (IJIRA) fosters the dissemination of new discoveries and novel technologies that advance developments in robotics and their broad applications. The journal provides a publication and communication platform for all robotics topics, from theoretical fundamentals and technological advances to applications including manufacturing, space vehicles, biomedical systems and automobiles, data-storage devices, healthcare systems, home appliances, and intelligent highways. IJIRA welcomes contributions from researchers, professionals and industrial practitioners. It publishes original, high-quality and previously unpublished research papers, brief reports, and critical reviews. Specific areas of interest include, but are not limited to:

- Advanced actuators and sensors
- Collective and social robots
- Computing, communication and control
- Design, modeling and prototyping
- Human and robot interaction
- Machine learning and intelligence
- Mobile robots and intelligent autonomous systems
- Multi-sensor fusion and perception
- Planning, navigation and localization
- Robot intelligence, learning and linguistics
- Robotic vision, recognition and reconstruction
- Bio-mechatronics and robotics
- Cloud and swarm robotics
- Cognitive and neuro robotics
- Exploration and security robotics
- Healthcare, medical and assistive robotics
- Robotics for intelligent manufacturing
- Service, social and entertainment robotics
- Space and underwater robots
- Novel and emerging applications