{"title":"学习球在杯子里玩机器人","authors":"B. Nemec, Martin Zorko, L. Žlajpah","doi":"10.1109/RAAD.2010.5524570","DOIUrl":null,"url":null,"abstract":"In the paper we evaluate two learning methods applied to the ball-in-a-cup game. The first approach is based on imitation learning. The captured trajectory was encoded with Dynamic motion primitives (DMP). The DMP approach allows simple adaptation of the demonstrated trajectory to the robot dynamics. In the second approach, we use reinforcement learning, which allows learning without any previous knowledge of the system or the environment. In contrast to the majority of the previous attempts, we used SASRA learning algorithm. Experimental results for both cases were performed on Mitsubishi PA10 robot arm.","PeriodicalId":104308,"journal":{"name":"19th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2010)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":"{\"title\":\"Learning of a ball-in-a-cup playing robot\",\"authors\":\"B. Nemec, Martin Zorko, L. Žlajpah\",\"doi\":\"10.1109/RAAD.2010.5524570\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the paper we evaluate two learning methods applied to the ball-in-a-cup game. The first approach is based on imitation learning. The captured trajectory was encoded with Dynamic motion primitives (DMP). The DMP approach allows simple adaptation of the demonstrated trajectory to the robot dynamics. In the second approach, we use reinforcement learning, which allows learning without any previous knowledge of the system or the environment. In contrast to the majority of the previous attempts, we used SASRA learning algorithm. 
Experimental results for both cases were performed on Mitsubishi PA10 robot arm.\",\"PeriodicalId\":104308,\"journal\":{\"name\":\"19th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2010)\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-06-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"17\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"19th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2010)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RAAD.2010.5524570\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"19th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2010)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RAAD.2010.5524570","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
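The abstract names SARSA as the reinforcement learning algorithm. As a rough illustration of how SARSA differs from off-policy methods like Q-learning (the update bootstraps from the action actually taken next), here is a minimal tabular sketch on a toy chain environment; the environment, states, and all parameters are illustrative and not from the paper.

```python
import random

N_STATES = 6          # toy chain: states 0..5, state 5 is the goal
ACTIONS = [-1, +1]    # move left / right along the chain

def step(s, a):
    """Toy environment: reward 1.0 on reaching the goal, else 0."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def sarsa(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def choose(s):
        # epsilon-greedy, on-policy action selection
        if rng.random() < eps:
            return rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = 0
        a = choose(s)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = choose(s2)
            # SARSA update: bootstrap from Q of the *next action taken*
            target = r + (0.0 if done else gamma * Q[(s2, a2)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
    return Q
```

After training, moving toward the goal should carry a higher value than moving away, e.g. `Q[(4, +1)] > Q[(4, -1)]`.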