Predicting the robot's grip capacity on different objects using multi-object grasping
Joseph Teguh Santoso, Mars Caroline Wibowo, Budi Raharjo
International Journal of Intelligent Robotics and Applications, published 2024-06-19. DOI: https://doi.org/10.1007/s41315-024-00342-1
Citations: 0
Abstract
This study explores the novel concept of Multi-Object Grasping (MOG) and develops an architecture based on autoencoders and transformers for accurate object prediction in MOG scenarios. The approach employs different deep learning methods and diverse training strategies on a ping-pong-ball dataset. The parameters obtained from this training enhance the model's performance on the actual system dataset, which serves as the final test and validation of the model's reliability in real-world situations. Comparing the model's performance on both datasets supports validation and refinement, affirming its effectiveness in practical robotic applications. The study shows that training on varied dataset features significantly improves prediction accuracy compared with a naïve model based on dense neural networks. Using five time steps notably enhances prediction accuracy, especially with the GRU model on time-series data, achieving a peak accuracy of 96%. While MOG has been studied extensively, this study introduces a novel architecture distinct from traditional vision-based methods. A framework is established that uses autoencoder and transformer technologies to handle tactile sensors, hand-pose joint angles, and force measurements. This approach demonstrates the potential for accurately predicting multiple objects in MOG scenarios.
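To make the time-series setup concrete, the sketch below shows one plausible form of the GRU model the abstract refers to: a recurrent network that reads a short window of tactile, joint-angle, and force readings (five time steps) and predicts how many objects are held. This is a minimal illustration, not the authors' implementation; the feature dimension, hidden size, class count, and names such as GraspCountGRU are illustrative assumptions rather than values reported in the paper.

```python
# Minimal sketch of a GRU-based grasp-count predictor (assumed layout,
# not the paper's code). Input sizes and hyperparameters are placeholders.
import torch
import torch.nn as nn

class GraspCountGRU(nn.Module):
    def __init__(self, n_features=32, hidden=64, n_classes=4):
        super().__init__()
        # GRU consumes (batch, time_steps, n_features) windows of sensor data.
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        # Linear head maps the final hidden state to object-count logits.
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_features); the study uses five time steps.
        _, h = self.gru(x)           # h: (1, batch, hidden) final hidden state
        return self.head(h[-1])      # logits over possible object counts

model = GraspCountGRU()
window = torch.randn(8, 5, 32)       # batch of 8 five-step sensor windows
logits = model(window)                # shape (8, 4): one score per count class
```

A dense (fully connected) baseline on the same flattened window would correspond to the naïve model the abstract compares against; the recurrent formulation lets the predictor exploit temporal structure across the five steps.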
Journal description:
The International Journal of Intelligent Robotics and Applications (IJIRA) fosters the dissemination of new discoveries and novel technologies that advance developments in robotics and their broad applications. The journal provides a publication and communication platform for all robotics topics, from theoretical fundamentals and technological advances to applications including manufacturing, space vehicles, biomedical systems and automobiles, data-storage devices, healthcare systems, home appliances, and intelligent highways. IJIRA welcomes contributions from researchers, professionals and industrial practitioners. It publishes original, high-quality and previously unpublished research papers, brief reports, and critical reviews. Specific areas of interest include, but are not limited to:
- Advanced actuators and sensors
- Collective and social robots
- Computing, communication and control
- Design, modeling and prototyping
- Human and robot interaction
- Machine learning and intelligence
- Mobile robots and intelligent autonomous systems
- Multi-sensor fusion and perception
- Planning, navigation and localization
- Robot intelligence, learning and linguistics
- Robotic vision, recognition and reconstruction
- Bio-mechatronics and robotics
- Cloud and swarm robotics
- Cognitive and neuro robotics
- Exploration and security robotics
- Healthcare, medical and assistive robotics
- Robotics for intelligent manufacturing
- Service, social and entertainment robotics
- Space and underwater robots
- Novel and emerging applications