An improved approach to reinforcement learning in Computer Go
Michael Dann, Fabio Zambetta, John Thangarajah
2015 IEEE Conference on Computational Intelligence and Games (CIG)
Published: 2015-08-01
DOI: 10.1109/CIG.2015.7317910
Citations: 1
Abstract
Monte-Carlo Tree Search (MCTS) has revolutionized Computer Go, with programs based on the algorithm achieving a level of play that previously seemed decades away. However, since the technique involves constructing a search tree, its performance tends to degrade in larger state spaces. Dyna-2 is a hybrid approach that attempts to overcome this shortcoming by combining Monte-Carlo methods with state abstraction. While not competitive with the strongest MCTS-based programs, the Dyna-2-based program RLGO achieved the highest rating ever attained by a traditional program on the 9×9 Computer Go Server. Plain Dyna-2 uses ε-greedy exploration and a flat learning rate, but we show that the performance of the algorithm can be significantly improved by making some relatively minor adjustments to this configuration. Our strongest modified program achieved an Elo rating 289 points higher than the original in head-to-head play, equivalent to an expected win rate of 84%.
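Two of the abstract's claims can be made concrete with a short sketch: plain ε-greedy action selection (the exploration scheme the abstract attributes to unmodified Dyna-2), and the standard Elo expected-score formula, which confirms that a 289-point rating gap corresponds to an expected win rate of roughly 84%. The function names and value representation here are illustrative, not the paper's actual implementation.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Plain ε-greedy: explore uniformly with probability epsilon,
    otherwise pick the action with the highest estimated value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def elo_expected_score(rating_diff):
    """Standard Elo expected score for the player rated rating_diff
    points above the opponent."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

# A 289-point Elo advantage yields an expected score of ~0.84,
# matching the 84% win rate quoted in the abstract.
print(round(elo_expected_score(289), 2))  # → 0.84
```

With `epsilon = 0` the selection is purely greedy, which is why annealing ε over time (rather than keeping it fixed) is one of the "relatively minor adjustments" a configuration like this admits.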