Versatile In-Hand Manipulation of Objects with Different Sizes and Shapes Using Neural Networks
Satoshi Funabashi, A. Schmitz, Takashi Sato, S. Somlor, S. Sugano
2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), November 2018
DOI: 10.1109/HUMANOIDS.2018.8624961
Citations: 7
Abstract
Changing the grasping posture of an object within a robot hand is hard to achieve, especially for objects of various shapes and sizes. In this paper we use a neural network to learn such manipulation with objects of different sizes and shapes. The TWENDY-ONE hand possesses several properties that are effective for in-hand manipulation: a high number of actuated joints, passive degrees of freedom and soft skin, six-axis force/torque (F/T) sensors in each fingertip, and distributed tactile sensors in the soft skin. Object size information is extracted from the initial grasping posture, and the training data includes both tactile information and object information. After training the neural network, the robot is able to manipulate objects not only of trained but also of untrained sizes and shapes. The results show the importance of size and tactile information. Importantly, features extracted by a stacked autoencoder (trained with a larger dataset) could reduce the number of training samples required for supervised learning of in-hand manipulation.
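A minimal sketch of the training scheme the abstract describes: pretrain a stacked autoencoder on a larger unlabeled tactile dataset, then reuse its encoder as a feature extractor for a smaller supervised network that maps tactile readings plus object-size information to finger-joint commands. This is not the authors' implementation; all dimensions, layer sizes, and variable names below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

TACTILE_DIM = 256   # hypothetical: flattened skin + fingertip F/T readings
SIZE_DIM = 3        # hypothetical: object-size features from the initial grasp posture
JOINT_DIM = 13      # hypothetical: actuated joint targets of the hand

class StackedAutoencoder(nn.Module):
    """Two-layer autoencoder; the encoder is later reused as a feature extractor."""
    def __init__(self, in_dim=TACTILE_DIM, hidden=(128, 64)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], in_dim),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

class ManipulationNet(nn.Module):
    """Supervised head: pretrained tactile features + object size -> joint commands."""
    def __init__(self, encoder, feat_dim=64):
        super().__init__()
        self.encoder = encoder            # pretrained encoder (frozen or fine-tuned)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + SIZE_DIM, 64), nn.ReLU(),
            nn.Linear(64, JOINT_DIM),
        )
    def forward(self, tactile, size):
        feat = self.encoder(tactile)
        return self.head(torch.cat([feat, size], dim=-1))

# 1) Unsupervised pretraining on a larger tactile dataset (reconstruction loss).
sae = StackedAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
unlabeled = torch.randn(512, TACTILE_DIM)        # stand-in for recorded tactile data
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(sae(unlabeled), unlabeled)
    loss.backward()
    opt.step()

# 2) Supervised training on the smaller labeled manipulation dataset.
model = ManipulationNet(sae.encoder)
opt2 = torch.optim.Adam(model.parameters(), lr=1e-3)
tactile = torch.randn(64, TACTILE_DIM)           # stand-in for labeled tactile samples
size = torch.randn(64, SIZE_DIM)                 # stand-in for object-size inputs
target_joints = torch.randn(64, JOINT_DIM)       # stand-in for demonstrated joint targets
for _ in range(10):
    opt2.zero_grad()
    loss = nn.functional.mse_loss(model(tactile, size), target_joints)
    loss.backward()
    opt2.step()
```

The point of the two-stage setup is the abstract's final claim: because the encoder is learned from a larger unlabeled dataset, the supervised stage needs fewer labeled manipulation samples than training end to end from raw tactile input.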