{"title":"Continual Learning of Vacuum Grasps from Grasp Outcome for Unsupervised Domain Adaption","authors":"Maximilian Gilles, Vinzenz Rau","doi":"10.1109/RAAI56146.2022.10092970","DOIUrl":null,"url":null,"abstract":"Training grasping robots in isolation can result in large performance gaps when deploying to real world applications. This problem gains in importance when synthetic data is used for training. To meet the desired performance for a specific use-case, fine-tuning the model's parameters to account for the persistent domain shift between training and application data is usually required. To speed up deployment time and reduce costs, a picking robot should be able to continually adapt to its new domain by incorporating knowledge generated during operation. The proposed method enables a robot to perform domain adaption from source domain to target domain data completely selfsupervised by continually adapting its model's weights to the new target domain, relying only on feedback about grasp success or failure. It is based on two core ideas: 1) extrapolation of the suctionable area around a conducted grasp based on local curvature analysis of sensor data, and 2) uncertainty-weighted knowledge distillation-based pseudo labels for ambiguous background pixels for which no information about graspability is available from the current experiment. 
Extensive sim-to-real experiments on the challenging MetaGraspNet dataset show that the proposed method improves grasp success rate in average by more than 13% on real world scenes compared to purely synthetic training data.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RAAI56146.2022.10092970","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Training grasping robots in isolation can result in large performance gaps when deploying to real-world applications. This problem becomes more pronounced when synthetic data is used for training. To reach the desired performance for a specific use case, the model's parameters usually have to be fine-tuned to account for the persistent domain shift between training and application data. To shorten deployment time and reduce costs, a picking robot should be able to continually adapt to its new domain by incorporating knowledge generated during operation. The proposed method enables a robot to perform domain adaptation from source-domain to target-domain data in a completely self-supervised manner by continually adapting its model's weights to the new target domain, relying only on feedback about grasp success or failure. It is based on two core ideas: 1) extrapolation of the suctionable area around a conducted grasp based on local curvature analysis of the sensor data, and 2) uncertainty-weighted, knowledge-distillation-based pseudo-labels for ambiguous background pixels, for which no graspability information is available from the current experiment. Extensive sim-to-real experiments on the challenging MetaGraspNet dataset show that the proposed method improves the grasp success rate on real-world scenes by more than 13% on average compared to training on purely synthetic data.
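The two core ideas can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: the function name, the fixed-radius neighborhood (a stand-in for the paper's curvature-based extrapolation of the suctionable area), and the particular confidence-based weighting. The idea is that pixels near the executed grasp inherit the observed outcome as a hard label, while background pixels fall back to the teacher network's predictions as distillation targets, down-weighted where the teacher is uncertain.

```python
import numpy as np

def build_targets_and_weights(grasp_px, success, teacher_prob, radius=5):
    """Build per-pixel pseudo-labels for one grasp attempt (illustrative sketch).

    grasp_px     : (row, col) of the executed suction grasp
    success      : bool, observed grasp outcome
    teacher_prob : (H, W) graspability map predicted by a frozen teacher model
    radius       : pixels around the grasp assumed to share its outcome
                   (stand-in for the curvature-based area extrapolation)
    """
    h, w = teacher_prob.shape
    rows, cols = np.ogrid[:h, :w]
    near = (rows - grasp_px[0]) ** 2 + (cols - grasp_px[1]) ** 2 <= radius ** 2

    # Near the grasp point, the observed outcome overrides the teacher.
    targets = np.where(near, float(success), teacher_prob)

    # For background pixels, weight the distillation target by the teacher's
    # confidence: probabilities near 0.5 get low weight, near 0/1 high weight.
    uncertainty = 1.0 - 2.0 * np.abs(teacher_prob - 0.5)  # 0 when confident
    weights = np.where(near, 1.0, 1.0 - uncertainty)
    return targets, weights
```

A student model could then be trained with a per-pixel binary cross-entropy loss against `targets`, multiplied by `weights`, so that confident teacher predictions and directly observed outcomes dominate the gradient while ambiguous background pixels contribute little.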