{"title":"Robot grasp synthesis from virtual demonstration and topology-preserving environment reconstruction","authors":"J. Aleotti, S. Caselli","doi":"10.1109/IROS.2007.4399011","DOIUrl":null,"url":null,"abstract":"Automatic environment modeling is an essential requirement for intelligent robots to execute manipulation tasks. Object recognition and workspace reconstruction also enable 3D user interaction and programming of assembly operations. In this paper a novel method for synthesizing robot grasps from demonstration is presented. The system allows learning and classification of human grasps demonstrated in virtual reality as well as teaching of robot grasps and simulation of manipulation tasks. Both virtual grasp demonstration and grasp synthesis take advantage of a topology-preserving approach for automatic workspace modeling with a monocular camera. The method is based on the computation of edge-face graphs. The algorithm works in real-time and shows high scalability in the number of objects thus allowing accurate reconstruction and registration from multiple views. Grasp synthesis is performed mimicking the human hand pre-grasp motion with data smoothing. 
Experiments reported in the paper have tested the capabilities of both the vision algorithm and the grasp synthesizer.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2007.4399011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 20
Abstract
Automatic environment modeling is an essential requirement for intelligent robots to execute manipulation tasks. Object recognition and workspace reconstruction also enable 3D user interaction and the programming of assembly operations. This paper presents a novel method for synthesizing robot grasps from demonstration. The system supports learning and classification of human grasps demonstrated in virtual reality, as well as teaching of robot grasps and simulation of manipulation tasks. Both virtual grasp demonstration and grasp synthesis exploit a topology-preserving approach to automatic workspace modeling with a monocular camera, based on the computation of edge-face graphs. The algorithm runs in real time and scales well with the number of objects, allowing accurate reconstruction and registration from multiple views. Grasp synthesis mimics the human hand's pre-grasp motion, with data smoothing. Experiments reported in the paper evaluate the capabilities of both the vision algorithm and the grasp synthesizer.
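The abstract names two concrete ingredients: an edge-face graph over the reconstructed workspace, and smoothing of the demonstrated pre-grasp motion. The paper's exact formulations are not given here, so the following is a minimal, hypothetical sketch: a face-adjacency graph built by linking mesh faces that share an edge (one common reading of "edge-face graph"), and a simple moving-average smoother over a demonstrated hand trajectory.

```python
from collections import defaultdict

def edge_face_graph(faces):
    """Face-adjacency graph over a polygon mesh: two faces are
    connected when they share an edge. `faces` is a list of
    vertex-index tuples (a hypothetical mesh input format)."""
    edge_to_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        n = len(face)
        for k in range(n):
            # Undirected edge between consecutive vertices of the face.
            edge = tuple(sorted((face[k], face[(k + 1) % n])))
            edge_to_faces[edge].append(fi)
    graph = defaultdict(set)
    for fs in edge_to_faces.values():
        for a in fs:
            for b in fs:
                if a != b:
                    graph[a].add(b)
    return dict(graph)

def smooth_trajectory(points, window=3):
    """Moving-average smoothing of a demonstrated hand path,
    one plausible form of the abstract's 'data smoothing'.
    `points` is a list of coordinate tuples."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        seg = points[lo:hi]
        out.append(tuple(sum(c) / len(seg) for c in zip(*seg)))
    return out
```

For example, two triangles sharing an edge become two mutually adjacent graph nodes, and a jittery demonstrated path is replaced by local window averages; the actual algorithms in the paper may differ substantially.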