{"title":"基于RL的观察模仿和基于图的演示表示","authors":"Y. Manyari, P. Callet, Laurent Dollé","doi":"10.1109/ICMLA55696.2022.00202","DOIUrl":null,"url":null,"abstract":"Teaching robots behavioral skills by leveraging examples provided by an expert, also referred to as Imitation Learning from Observation (IfO or ILO), is a promising approach for learning novel tasks without requiring a task-specific reward function to be engineered. We propose a RL-based framework to teach robots manipulation tasks given expert observation-only demonstrations. First, a representation model is trained to extract spatial and temporal features from demonstrations. Graph Neural Networks (GNNs) are used to encode spatial patterns, while LSTMs and Transformers are used to encode temporal features. Second, based on an off-the-shelf RL algorithm, the demonstrations are leveraged through the trained representation to guide the policy training towards solving the task demonstrated by the expert. We show that our approach compares favorably to state-of-the-art IfO algorithms with a 99% success rate and transfers well to the real world.","PeriodicalId":128160,"journal":{"name":"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Imitation from Observation using RL and Graph-based Representation of Demonstrations\",\"authors\":\"Y. Manyari, P. Callet, Laurent Dollé\",\"doi\":\"10.1109/ICMLA55696.2022.00202\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Teaching robots behavioral skills by leveraging examples provided by an expert, also referred to as Imitation Learning from Observation (IfO or ILO), is a promising approach for learning novel tasks without requiring a task-specific reward function to be engineered. We propose a RL-based framework to teach robots manipulation tasks given expert observation-only demonstrations. First, a representation model is trained to extract spatial and temporal features from demonstrations. Graph Neural Networks (GNNs) are used to encode spatial patterns, while LSTMs and Transformers are used to encode temporal features. Second, based on an off-the-shelf RL algorithm, the demonstrations are leveraged through the trained representation to guide the policy training towards solving the task demonstrated by the expert. 
We show that our approach compares favorably to state-of-the-art IfO algorithms with a 99% success rate and transfers well to the real world.\",\"PeriodicalId\":128160,\"journal\":{\"name\":\"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA55696.2022.00202\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA55696.2022.00202","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Imitation from Observation using RL and Graph-based Representation of Demonstrations
Teaching robots behavioral skills by leveraging examples provided by an expert, also referred to as Imitation Learning from Observation (IfO or ILO), is a promising approach for learning novel tasks without requiring a task-specific reward function to be engineered. We propose an RL-based framework to teach robots manipulation tasks given expert observation-only demonstrations. First, a representation model is trained to extract spatial and temporal features from demonstrations: Graph Neural Networks (GNNs) encode spatial patterns, while LSTMs and Transformers encode temporal features. Second, based on an off-the-shelf RL algorithm, the demonstrations are leveraged through the trained representation to guide policy training toward solving the task demonstrated by the expert. We show that our approach compares favorably to state-of-the-art IfO algorithms, achieving a 99% success rate, and transfers well to the real world.
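The abstract describes the representation model only at a high level. As a hedged illustration (not the authors' implementation), the sketch below shows one way such a spatio-temporal demonstration encoder could be structured in plain PyTorch: a simple message-passing layer encodes each frame's object graph, and an LSTM summarizes the resulting sequence of frame embeddings. The class names, layer sizes, graph construction, and pooling choices are all assumptions.

```python
# Hedged sketch (assumptions, not the paper's code): encode a demonstration,
# given as a sequence of per-frame object graphs, into a single vector.
# A GNN-style layer captures spatial structure; an LSTM captures temporal structure.
import torch
import torch.nn as nn


class SimpleGraphLayer(nn.Module):
    """One round of message passing: average features over each node's
    neighborhood (with self-loops), then apply a shared linear map."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, in_dim), adj: (num_nodes, num_nodes)
        adj = adj + torch.eye(adj.size(0))          # add self-loops
        adj = adj / adj.sum(dim=1, keepdim=True)    # row-normalize
        return torch.relu(self.linear(adj @ node_feats))


class DemoEncoder(nn.Module):
    """Encodes a demonstration (sequence of object graphs) into one vector:
    GNN layer for spatial patterns per frame, LSTM over frame embeddings."""

    def __init__(self, node_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.gnn = SimpleGraphLayer(node_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, frames: list[tuple[torch.Tensor, torch.Tensor]]) -> torch.Tensor:
        # frames: list of (node_feats, adj) pairs, one pair per time step
        frame_embeddings = []
        for node_feats, adj in frames:
            node_embeddings = self.gnn(node_feats, adj)
            frame_embeddings.append(node_embeddings.mean(dim=0))  # mean-pool nodes
        seq = torch.stack(frame_embeddings).unsqueeze(0)  # (1, T, hidden_dim)
        _, (h_n, _) = self.lstm(seq)
        return h_n[-1].squeeze(0)  # final hidden state summarizes the demo


# Usage example: a 5-frame demonstration, each frame a graph of 4 objects
# with 6-dimensional node features and a random adjacency matrix.
encoder = DemoEncoder(node_dim=6)
demo = [(torch.randn(4, 6), torch.bernoulli(torch.full((4, 4), 0.5))) for _ in range(5)]
embedding = encoder(demo)
print(embedding.shape)  # torch.Size([64])
```

In an IfO setting like the one described, an embedding of this kind could be compared between the agent's rollout and the expert demonstration to shape the RL policy's training signal; the exact way the paper uses the trained representation to guide the off-the-shelf RL algorithm is not specified in the abstract.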