Refining Co-operative Competition of Robocup Soccer with Reinforcement Learning
Zhengqiao Wang, Yufan Zeng, Yue Yuan, Yibo Guo
2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC), July 2020
DOI: 10.1109/DSC50466.2020.00049
Citations: 1
Abstract
Reinforcement learning (RL) has been widely applied in RoboCup soccer games because of its great potential for enhancing performance in model-free competitive scenarios. In recent years, researchers have made considerable efforts to reduce the input size of RL models in order to speed up the training of RoboCup soccer agents. In this work, we propose an improved DQN algorithm named Hierarchical Movement Grouped Deep-Q-Network (HMG-DQN). The algorithm can be trained on actions at a high level of the movement-group hierarchy, which is especially effective in co-operative competition scenarios such as 2v1 and 3v2 break-throughs. We conducted experiments on a simulation platform based on RoboCup SPL rules, and the results show that our improved algorithm achieves a significantly higher winning rate than DQN.
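The core idea the abstract describes, learning over a small set of high-level movement groups rather than raw primitive actions, can be illustrated with a minimal tabular sketch. This is not the paper's HMG-DQN (which uses a deep Q-network); the group names and the tabular Q-learning update below are illustrative assumptions, showing only how grouping actions shrinks the space the agent must explore.

```python
import numpy as np

# Hypothetical high-level movement groups for a break-through scenario
# (illustrative names only; the paper's actual action hierarchy is not given here).
GROUPS = {
    "dribble": ["dash_forward", "turn_left", "turn_right"],
    "pass":    ["short_pass", "long_pass"],
    "shoot":   ["shoot_near_post", "shoot_far_post"],
}

class GroupedQLearner:
    """Tabular Q-learning over movement groups instead of primitive actions.

    With 3 groups rather than 7 primitives, the Q-table (and in the deep
    case, the network's output layer) is smaller, which is the intuition
    behind training at a high level of the movement-group hierarchy.
    """

    def __init__(self, n_states, groups, alpha=0.1, gamma=0.99, seed=0):
        self.groups = list(groups)
        self.q = np.zeros((n_states, len(self.groups)))
        self.alpha, self.gamma = alpha, gamma
        self.rng = np.random.default_rng(seed)

    def select(self, state, eps=0.1):
        """Epsilon-greedy choice of a movement group index."""
        if self.rng.random() < eps:
            return int(self.rng.integers(len(self.groups)))
        return int(np.argmax(self.q[state]))

    def update(self, s, g, reward, s_next):
        """Standard one-step Q-learning update on the chosen group."""
        target = reward + self.gamma * self.q[s_next].max()
        self.q[s, g] += self.alpha * (target - self.q[s, g])

agent = GroupedQLearner(n_states=4, groups=GROUPS)
g = agent.select(0, eps=0.0)          # greedy pick among 3 groups
agent.update(0, g, reward=1.0, s_next=1)
```

Once a group is chosen, a lower-level controller would pick a primitive action within it; the paper evaluates this kind of hierarchy in 2v1 and 3v2 break-throughs.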