{"title":"蒙特卡罗树搜索的非渐近分析","authors":"D. Shah, Qiaomin Xie, Zhi Xu","doi":"10.1287/opre.2021.2239","DOIUrl":null,"url":null,"abstract":"In “Nonasymptotic Analysis of Monte Carlo Tree Search,” D. Shah, Q. Xie, and Z. Xu consider the popular tree-based search strategy, the Monte Carlo Tree Search (MCTS), in the context of the infinite-horizon discounted Markov decision process. They show that MCTS with an appropriate polynomial rather than logarithmic bonus term indeed leads to the desired convergence property. The authors derive the results by establishing a polynomial concentration property of regret for a class of nonstationary multiarm bandits. Furthermore, using this as a building block, they demonstrate that MCTS, combined with nearest neighbor supervised learning, acts as a “policy improvement” operator that can iteratively improve value function approximation.","PeriodicalId":19546,"journal":{"name":"Oper. Res.","volume":"23 1","pages":"3234-3260"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Nonasymptotic Analysis of Monte Carlo Tree Search\",\"authors\":\"D. Shah, Qiaomin Xie, Zhi Xu\",\"doi\":\"10.1287/opre.2021.2239\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In “Nonasymptotic Analysis of Monte Carlo Tree Search,” D. Shah, Q. Xie, and Z. Xu consider the popular tree-based search strategy, the Monte Carlo Tree Search (MCTS), in the context of the infinite-horizon discounted Markov decision process. They show that MCTS with an appropriate polynomial rather than logarithmic bonus term indeed leads to the desired convergence property. The authors derive the results by establishing a polynomial concentration property of regret for a class of nonstationary multiarm bandits. Furthermore, using this as a building block, they demonstrate that MCTS, combined with nearest neighbor supervised learning, acts as a “policy improvement” operator that can iteratively improve value function approximation.\",\"PeriodicalId\":19546,\"journal\":{\"name\":\"Oper. Res.\",\"volume\":\"23 1\",\"pages\":\"3234-3260\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Oper. Res.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1287/opre.2021.2239\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Oper. Res.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1287/opre.2021.2239","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In “Nonasymptotic Analysis of Monte Carlo Tree Search,” D. Shah, Q. Xie, and Z. Xu study the popular tree-based search strategy, Monte Carlo Tree Search (MCTS), in the context of infinite-horizon discounted Markov decision processes. They show that MCTS with an appropriately chosen polynomial, rather than logarithmic, bonus term achieves the desired convergence property. The authors derive this result by establishing a polynomial concentration property of regret for a class of nonstationary multi-armed bandits. Building on this as a core ingredient, they further show that MCTS, combined with nearest-neighbor supervised learning, acts as a “policy improvement” operator that can iteratively improve the value function approximation.
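To make the algorithmic change concrete, the following is a minimal Python sketch of bandit arm selection that contrasts the classical logarithmic UCB/UCT bonus with a polynomial bonus of the generic form beta * t**alpha / s. The function names, constants, and exponents here are illustrative assumptions, not the paper's exact prescription; the paper's specific parameter choices follow from its concentration analysis.

    import math

    def bonus_logarithmic(t, s, c=2.0):
        # Classical UCB1/UCT exploration bonus: c * sqrt(ln(t) / s),
        # where t is the total play count at the node and s is the
        # play count of this arm.
        return c * math.sqrt(math.log(t) / s)

    def bonus_polynomial(t, s, beta=1.0, alpha=0.5):
        # Generic polynomial exploration bonus: beta * t**alpha / s.
        # beta and alpha are illustrative placeholders; the paper ties
        # the exact exponents to its polynomial concentration bounds.
        return beta * (t ** alpha) / s

    def select_arm(means, counts, bonus_fn):
        # Pick the arm maximizing empirical mean + exploration bonus.
        # Unplayed arms (count 0) are always tried first.
        t = sum(counts)
        best, best_score = None, -float("inf")
        for i, (mu, s) in enumerate(zip(means, counts)):
            score = float("inf") if s == 0 else mu + bonus_fn(t, s)
            if score > best_score:
                best, best_score = i, score
        return best

    # Example: two arms, selection under the polynomial bonus.
    arm = select_arm(means=[0.4, 0.6], counts=[10, 5],
                     bonus_fn=bonus_polynomial)
    print(arm)

The motivation for the polynomial form is that reward sequences at internal MCTS nodes are nonstationary and concentrate only polynomially, so the logarithmic bonus of standard UCT, which is calibrated to exponential concentration, does not deliver the desired guarantees.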