Shicheng Wang, Zheng-xi Song, Hao Ding, Hao-bin Shi
{"title":"一种基于BP神经网络的机器人足球强化q学习方法","authors":"Shicheng Wang, Zheng-xi Song, Hao Ding, Hao-bin Shi","doi":"10.1109/ISCID.2011.53","DOIUrl":null,"url":null,"abstract":"In traditional reinforcement Q-Learning method, there exists two problems: difficulty of dividing the state information, complexity of extreme large dimension input. To solve these two problems, this paper proposed an improved reinforcement Q-Learning method with BP neutral network. In this method, the large Q table is replaced by a BP neural network. Continuous environmental information is the input. The Q value is the output. The Q value and weight of the network are also adjusted by the action rewards. This paper presents an algorithm for single agent's action selection. Simulation shows proposed method is more stable and applicable for the agent's strategy selection.","PeriodicalId":224504,"journal":{"name":"2011 Fourth International Symposium on Computational Intelligence and Design","volume":"119 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"An Improved Reinforcement Q-Learning Method with BP Neural Networks in Robot Soccer\",\"authors\":\"Shicheng Wang, Zheng-xi Song, Hao Ding, Hao-bin Shi\",\"doi\":\"10.1109/ISCID.2011.53\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In traditional reinforcement Q-Learning method, there exists two problems: difficulty of dividing the state information, complexity of extreme large dimension input. To solve these two problems, this paper proposed an improved reinforcement Q-Learning method with BP neutral network. In this method, the large Q table is replaced by a BP neural network. Continuous environmental information is the input. The Q value is the output. The Q value and weight of the network are also adjusted by the action rewards. This paper presents an algorithm for single agent's action selection. Simulation shows proposed method is more stable and applicable for the agent's strategy selection.\",\"PeriodicalId\":224504,\"journal\":{\"name\":\"2011 Fourth International Symposium on Computational Intelligence and Design\",\"volume\":\"119 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 Fourth International Symposium on Computational Intelligence and Design\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCID.2011.53\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 Fourth International Symposium on Computational Intelligence and Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCID.2011.53","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Improved Reinforcement Q-Learning Method with BP Neural Networks in Robot Soccer
The traditional reinforcement Q-learning method suffers from two problems: the difficulty of dividing the state information into discrete states, and the complexity of handling extremely high-dimensional input. To solve these two problems, this paper proposes an improved reinforcement Q-learning method based on a BP neural network. In this method, the large Q table is replaced by a BP neural network that takes continuous environmental information as input and outputs the Q value. The Q value and the network weights are adjusted according to the action rewards. The paper also presents an action-selection algorithm for a single agent. Simulation shows that the proposed method is more stable and better suited to the agent's strategy selection.
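To make the core idea concrete, here is a minimal Python sketch (not the authors' implementation) of Q-learning in which a small BP (backpropagation) network replaces the Q table: the continuous state vector is the input, one Q value per action is the output, and the weights are adjusted from the TD error after each action reward. All sizes and hyperparameters (STATE_DIM, N_ACTIONS, HIDDEN, ALPHA, GAMMA, EPSILON) are illustrative assumptions.

```python
# Minimal sketch: Q-learning with a one-hidden-layer BP network in
# place of the Q table. Sizes and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_ACTIONS, HIDDEN = 4, 3, 16   # assumed dimensions
ALPHA, GAMMA, EPSILON = 0.01, 0.9, 0.1    # assumed hyperparameters

# Network parameters: continuous state in, one Q value per action out.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def forward(s):
    """Return hidden activations and Q values for state vector s."""
    h = np.tanh(s @ W1 + b1)
    return h, h @ W2 + b2

def select_action(s):
    """Epsilon-greedy action selection for a single agent."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    _, q = forward(s)
    return int(np.argmax(q))

def update(s, a, r, s_next, done):
    """Adjust the network weights from the TD error of one transition."""
    global W1, b1, W2, b2
    h, q = forward(s)
    _, q_next = forward(s_next)
    # TD target: reward plus discounted max Q of the next state.
    target = r if done else r + GAMMA * np.max(q_next)
    # Backpropagate the TD error through the chosen action's output only.
    dq = np.zeros(N_ACTIONS)
    dq[a] = q[a] - target
    dW2 = np.outer(h, dq)
    db2 = dq
    dh = (dq @ W2.T) * (1.0 - h**2)   # tanh derivative
    dW1 = np.outer(s, dh)
    db1 = dh
    W1 -= ALPHA * dW1; b1 -= ALPHA * db1
    W2 -= ALPHA * dW2; b2 -= ALPHA * db2
```

In a robot-soccer setting the state vector would encode continuous quantities such as ball and player positions; because the network generalizes across nearby states, no manual discretization of the state information is needed.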