Improving RL power for on-line evolution of gaits in modular robots
Milan Jelisavcic, Matteo De Carlo, E. Haasdijk, A. Eiben
2016 IEEE Symposium Series on Computational Intelligence (SSCI), 6 December 2016. DOI: 10.1109/SSCI.2016.7850166
This paper addresses the problem of on-line gait learning in modular robots whose shape is not known in advance. The best algorithm known to us for this problem is a reinforcement learning method called RL PoWER. In this study, we revisit the original RL PoWER algorithm and observe that it is, in essence, a specific evolutionary algorithm. Based on this insight, we propose two modifications of its main search operators and compare the quality of the evolved gaits when either or both of the modified operators are employed. The results show that using two-parent crossover as well as mutation with self-adaptive step-sizes can significantly improve the performance of the original algorithm.
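To make the two modified operators concrete, below is a minimal Python sketch of mutation with self-adaptive step-sizes and two-parent crossover, assuming a real-valued genome such as the spline control points of an RL PoWER gait controller. The learning rate TAU, the genome length, and the choice of uniform crossover are illustrative assumptions, not details taken from the paper.

import math
import random

# Self-adaptation learning rate; 1/sqrt(2) is a common default, not a
# value taken from the paper.
TAU = 1.0 / math.sqrt(2.0)

def mutate(genome, sigma):
    # Self-adaptive mutation: the step-size sigma is itself perturbed
    # (log-normally), then used to add Gaussian noise to every gene, so
    # the exploration scale evolves along with the genome.
    new_sigma = sigma * math.exp(TAU * random.gauss(0.0, 1.0))
    new_genome = [g + random.gauss(0.0, new_sigma) for g in genome]
    return new_genome, new_sigma

def crossover(parent_a, parent_b):
    # Two-parent uniform crossover: each gene is inherited from either
    # parent with equal probability (the paper's exact recombination
    # operator may differ).
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

# Toy usage on a six-gene genome:
parent_a = [random.uniform(-1.0, 1.0) for _ in range(6)]
parent_b = [random.uniform(-1.0, 1.0) for _ in range(6)]
child = crossover(parent_a, parent_b)
child, sigma = mutate(child, sigma=0.1)

Because the step-size travels with the genome, individuals that happen to carry a well-tuned sigma tend to produce fitter offspring, which lets the search adapt its own exploration scale on-line instead of relying on a fixed variance.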