Liangru Xie, Hongxiang Yu, Kechun Xu, Tong Yang, Minhang Wang, Haojian Lu, R. Xiong, Yue Wang
Title: Learning A Simulation-based Visual Policy for Real-world Peg In Unseen Holes
Journal: Journal of the Optical Society of America and Review of Scientific Instruments
DOI: 10.48550/arXiv.2205.04297 (https://doi.org/10.48550/arXiv.2205.04297)
Published: 2022-05-09 (Journal Article)
Citations: 0
Abstract
This paper proposes a learning-based visual peg-in-hole method that is trained with several shapes in simulation and adapts to arbitrary unseen shapes in the real world with minimal sim-to-real cost. The core idea is to decouple the generalization of the sensory-motor policy into a fast-adaptable perception module and a simulated generic policy module. The framework consists of a segmentation network (SN), a virtual sensor network (VSN), and a controller network (CN). Concretely, the VSN is trained to measure the pose of an unseen shape from a segmented image. Then, given the shape-agnostic pose measurement, the CN is trained to achieve generic peg-in-hole insertion. Finally, when applied to real unseen holes, only the SN needs to be fine-tuned to supply the segmentation required by the simulated VSN + CN. To further reduce the transfer cost, we propose to automatically collect and annotate the data for the SN after one minute of human teaching. Simulated and real-world results are presented under both eye-to-hand and eye-in-hand configurations. An electric vehicle charging system equipped with the proposed policy achieves a 10/10 success rate in 2-3 s, using only hundreds of auto-labeled samples for the SN transfer.
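The SN → VSN → CN decomposition described in the abstract can be sketched as a three-stage pipeline in which only the first stage is shape-specific. The function names and stub logic below are hypothetical illustrations of that decoupling, not the authors' implementation; each network is replaced by a trivial placeholder.

```python
import numpy as np

# Hypothetical sketch of the decoupled pipeline: only the segmentation
# stage (SN) would need fine-tuning for a new hole shape, while the pose
# measurement (VSN) and controller (CN) stages are trained once in
# simulation and reused unchanged.

def segment(image):
    """SN stand-in: mask the hole region (stub: intensity threshold)."""
    return (image > 0.5).astype(np.float32)

def measure_pose(mask):
    """VSN stand-in: shape-agnostic pose estimate (stub: mask centroid)."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def control(pose, target):
    """CN stand-in: generic policy (stub: proportional step to target)."""
    return 0.5 * (target - pose)

# Toy usage: an 8x8 "image" with a bright 2x2 hole region.
image = np.zeros((8, 8))
image[2:4, 5:7] = 1.0
action = control(measure_pose(segment(image)), target=np.array([4.0, 4.0]))
```

In this sketch, swapping in a new hole shape only changes `segment`; the downstream `measure_pose` and `control` interfaces are untouched, which mirrors the paper's claim that sim-to-real transfer reduces to fine-tuning the SN.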