Combining Monte Carlo tree search and apprenticeship learning for capture the flag
Jayden Ivanovo, W. Raffe, Fabio Zambetta, Xiaodong Li
2015 IEEE Conference on Computational Intelligence and Games (CIG), 5 November 2015. DOI: 10.1109/CIG.2015.7317914
In this paper we introduce a novel approach to agent control in competitive video games that combines Monte Carlo Tree Search (MCTS) and Apprenticeship Learning (AL). More specifically, an opponent model created through AL is used during the expansion phase of the Upper Confidence Bounds for Trees (UCT) variant of MCTS. We show how this approach can be applied to a game of Capture the Flag (CTF), an environment that is both non-deterministic and partially observable. The performance gain of a controller utilizing an AL-learned opponent model over a controller using plain UCT is shown with both win/loss ratios and TrueSkill rankings. Additionally, we build on previous findings by providing evidence of a bias towards a particular style of play in the AI Sandbox CTF environment. We believe that the approach highlighted here can be extended to a wider range of games beyond CTF.
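The key mechanism described in the abstract, substituting a learned opponent model for uniform action selection during UCT's expansion phase, can be illustrated with a short sketch. The Python below is a minimal, hypothetical rendering, not the authors' implementation: the `game` interface (`legal_actions`, `apply`, `rollout`, `is_opponent_turn`) and the `opponent_model.predict(state, actions)` call are assumed stand-ins, and two-player reward bookkeeping is simplified.

```python
import math
import random

class Node:
    """A node in the UCT search tree."""
    def __init__(self, state, game, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action                            # action that led to this node
        self.children = []
        self.untried = list(game.legal_actions(state))  # actions not yet expanded
        self.visits = 0
        self.value = 0.0                                # accumulated rollout reward

    def ucb1(self, c=1.4):
        """Upper Confidence Bound score used during the selection phase."""
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def uct_search(root_state, game, opponent_model, iterations=1000):
    """UCT in which the expansion phase is biased by a learned opponent model."""
    root = Node(root_state, game)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=lambda n: n.ucb1())
        # 2. Expansion: on the opponent's turn, expand the action the AL
        #    opponent model predicts; otherwise expand a random untried action.
        if node.untried:
            if game.is_opponent_turn(node.state):
                action = opponent_model.predict(node.state, node.untried)
            else:
                action = random.choice(node.untried)
            node.untried.remove(action)
            child = Node(game.apply(node.state, action), game,
                         parent=node, action=action)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to a terminal state (reward is taken
        #    from the root player's perspective; sign handling is omitted here).
        reward = game.rollout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited root action.
    return max(root.children, key=lambda n: n.visits).action
```

In the paper's setting, `opponent_model` would be the policy obtained via apprenticeship learning from observed CTF play; for the purposes of this sketch, any object exposing a `predict` method fits.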