{"title":"Q-Learning Acceleration via State-Space Partitioning","authors":"Haoran Wei, Kevin Corder, Keith S. Decker","doi":"10.1109/ICMLA.2018.00050","DOIUrl":null,"url":null,"abstract":"One of the biggest obstacles of Reinforcement Learning (RL) is its slow convergence rate in large state spaces or with sparse rewards. It has been shown that single-agent RL can be accelerated within a cooperative multi-agent scenario with information sharing, however the speedup depends on how well the agents' information can be used together. We demonstrate in this paper that state-space partitioning among agents can be realized by reward design without hard coded rules. The partitioning-associated reward directs agents to focus on different partitions and thus share information more efficiently. This approach has two advantages: (1) agents' actions are not diminished and remain relatively independent from one another; (2) it can be used to accelerate learning in both structured state domains (where partitions can be pre-determined) and arbitrarily-structured state domains (where partitions may be developed dynamically by agent teams as they explore the environment). Finally, we validate the method's efficacy by comparing it to previous related work in a simplified soccer domain.","PeriodicalId":6533,"journal":{"name":"2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"55 1","pages":"293-298"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2018.00050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Q-Learning Acceleration via State-Space Partitioning
One of the biggest obstacles to Reinforcement Learning (RL) is its slow convergence in large state spaces or under sparse rewards. It has been shown that single-agent RL can be accelerated within a cooperative multi-agent scenario with information sharing; however, the speedup depends on how well the agents' information can be combined. In this paper, we demonstrate that state-space partitioning among agents can be realized through reward design, without hard-coded rules. The partitioning-associated reward directs agents to focus on different partitions and thus share information more efficiently. This approach has two advantages: (1) agents' actions are not diminished and remain relatively independent of one another; (2) it can be used to accelerate learning in both structured state domains (where partitions can be predetermined) and arbitrarily structured state domains (where partitions may be developed dynamically by agent teams as they explore the environment). Finally, we validate the method's efficacy by comparing it to previous related work in a simplified soccer domain.
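To make the idea concrete, the sketch below shows one plausible way a partitioning-associated reward could be wired into tabular Q-learning: each agent receives a small bonus for transitions that land in its assigned partition, and agents merge their Q-tables through a simple sharing step. This is a minimal illustration under assumed details (the partition assignment, the bonus size, the `share` merge rule, and the agent interface are all hypothetical), not the paper's actual formulation or experimental setup.

```python
import random
from collections import defaultdict

class PartitionedQAgent:
    """Tabular Q-learning agent with a partition-associated reward bonus.

    Illustrative only: partition assignment, bonus size, and the sharing
    rule are assumptions, not the method as specified in the paper.
    """

    def __init__(self, actions, partition, alpha=0.1, gamma=0.95,
                 epsilon=0.1, bonus=0.5):
        self.q = defaultdict(float)   # Q[(state, action)] -> value
        self.actions = list(actions)
        self.partition = set(partition)  # states this agent focuses on
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.bonus = epsilon, bonus

    def act(self, state):
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next):
        # Partition-associated shaping: extra reward when the transition
        # reaches a state inside this agent's assigned partition.
        shaped = r + (self.bonus if s_next in self.partition else 0.0)
        target = shaped + self.gamma * max(self.q[(s_next, b)]
                                           for b in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

    def share(self, other):
        # Crude information sharing: adopt the other agent's Q-value
        # whenever it is larger than the local estimate.
        for key, val in other.q.items():
            if val > self.q[key]:
                self.q[key] = val
```

In such a sketch, each agent on a team would hold a different partition, learn mostly within it, and periodically call `share()` with teammates so that experience gathered in one partition benefits the others.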