Grasping novel objects with a dexterous robotic hand through neuroevolution
Pei-Chi Huang, J. Lehman, A. Mok, R. Miikkulainen, L. Sentis
2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA), December 2014
DOI: 10.1109/CICA.2014.7013242
Citations: 22
Abstract
Robotic grasping of a target object without advance knowledge of its three-dimensional model is a challenging problem. Many studies indicate that robot learning from demonstration (LfD) is a promising way to improve grasping performance, but complete automation of the grasping task in unforeseen circumstances remains difficult. As an alternative to LfD, this paper leverages limited human supervision to achieve robotic grasping of unknown objects in unforeseen circumstances. The technical question is what form of human supervision best minimizes the effort of the human supervisor. The approach here applies a human-supplied bounding box to focus the robot's visual processing on the target object, thereby reducing the dimensionality of the robot's computer vision processing. After the human supervisor defines the bounding box through the human-machine interface, the rest of the grasping task is automated through a vision-based feature-extraction approach in which the dexterous hand learns, via the NEAT neuroevolution algorithm, to grasp objects without relying on pre-computed object models. Given only low-level sensing data from a commercial depth sensor (the Kinect), our approach evolves neural networks to identify appropriate hand positions and orientations for grasping novel objects. Further, the machine learning results from simulation have been validated by transferring the training results to Dreamer, a physical robot built by Meka Robotics. The results demonstrate that grasping novel objects by carrying neuroevolution results over from simulation to reality is possible.
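The pipeline the abstract describes (depth features extracted inside the bounding box in, a candidate hand pose out, fitness scored by grasp quality) can be sketched as a simple evolutionary loop. This is a minimal illustrative sketch, not the authors' implementation: the feature count, the six-value pose encoding, and the fitness function are assumptions, and full NEAT additionally evolves network topology and uses speciation, which this fixed-topology loop omits.

```python
import math
import random

random.seed(0)

N_FEATURES = 8  # depth features from the bounding box region (assumed count)
N_OUTPUTS = 6   # hand pose: x, y, z, roll, pitch, yaw (assumed encoding)


def make_genome():
    # Genome = flattened weight matrix of a single linear layer (toy stand-in
    # for a NEAT genome, which would also encode topology).
    return [random.uniform(-1, 1) for _ in range(N_FEATURES * N_OUTPUTS)]


def activate(genome, features):
    # Toy feedforward network: one linear layer followed by tanh.
    pose = []
    for o in range(N_OUTPUTS):
        s = sum(genome[o * N_FEATURES + i] * f for i, f in enumerate(features))
        pose.append(math.tanh(s))
    return pose


def fitness(genome, features, target_pose):
    # Stand-in fitness: closeness of the proposed pose to a known-good grasp.
    # The paper instead scores grasps in a physics simulation.
    pose = activate(genome, features)
    return -sum((p - t) ** 2 for p, t in zip(pose, target_pose))


def evolve(features, target_pose, pop_size=30, generations=40):
    # Elitist (mu + lambda)-style loop: keep the better half unchanged,
    # fill the rest with Gaussian-mutated copies of the survivors.
    pop = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, features, target_pose), reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[w + random.gauss(0, 0.1) for w in p] for p in survivors]
        pop = survivors + children
    return max(pop, key=lambda g: fitness(g, features, target_pose))


features = [0.2, 0.5, 0.1, 0.9, 0.3, 0.7, 0.4, 0.6]  # fake depth features
target = [0.1, -0.2, 0.3, 0.0, 0.5, -0.1]            # fake reference pose
best = evolve(features, target)
print(activate(best, features))
```

In the paper's actual setup the evolved network is evaluated in simulation and the champion is transferred to the Dreamer robot; here the target pose merely stands in for that simulated grasp score.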