Pushing and grasping for autonomous learning of object models with foveated vision
Robert Bevec, A. Ude
2015 International Conference on Advanced Robotics (ICAR), published 2015-07-27
DOI: 10.1109/ICAR.2015.7251462 (https://doi.org/10.1109/ICAR.2015.7251462)
Citations: 3
Abstract
In this paper we address the problem of autonomous learning of the visual appearance of unknown objects. We propose a method that integrates foveated vision on a humanoid robot with autonomous object discovery and explorative manipulation actions such as pushing, grasping, and in-hand rotation. The humanoid robot starts by searching for objects in a visual scene and generating hypotheses about which parts of the scene could constitute an object. The hypothetical objects are verified by applying pushing actions, where the existence of an object is considered confirmed if the visual features exhibit rigid body motion. In our previous work we showed that partial object models can be learnt by a sequential application of several robot pushes, which generates views of the object's appearance from different viewpoints. However, with this approach it is not possible to guarantee that the object will be seen from all relevant viewpoints, even after a large number of pushes have been carried out. Instead, in this paper we show that confirmed object hypotheses contain enough information to enable grasping, and that object models can be acquired more effectively by sequentially rotating the object. We demonstrate the effectiveness of the new system by comparing object recognition results after the robot learns object models by two different approaches: (1) learning from images acquired by several pushes, and (2) learning from images acquired by an initial push followed by several grasp-rotate-release action cycles.
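The push-verification step described above — confirming an object hypothesis when tracked visual features exhibit rigid body motion — can be sketched as a least-squares rigid-transform fit over feature correspondences before and after a push. The sketch below (not the paper's actual implementation; the function names and the residual tolerance are illustrative assumptions) uses the standard Kabsch/Procrustes method in 2D:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares 2D rigid transform (R, t) mapping point set P onto Q,
    computed with the Kabsch/Procrustes method."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def moved_rigidly(P, Q, tol=0.01):
    """Confirm a hypothesis: True if features tracked before (P) and after (Q)
    a push are consistent with a single rigid body motion (illustrative tol)."""
    R, t = estimate_rigid_transform(P, Q)
    residuals = np.linalg.norm(Q - (P @ R.T + t), axis=1)
    return bool(np.max(residuals) < tol)
```

Features that all rotate and translate together pass the check, while points that move independently (e.g. belonging to background or to two separate items) produce large residuals and leave the hypothesis unconfirmed.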