State similarity based Rapid Action Value Estimation for general game playing MCTS agents

Tasos Papagiannis, Georgios Alexandridis, A. Stafylopatis

Proceedings of the 17th International Conference on the Foundations of Digital Games (FDG '22), September 5, 2022. DOI: 10.1145/3555858.3555914
As Monte Carlo Tree Search (MCTS) has become established as one of the most promising algorithms in the field of Game AI, several approaches have been proposed to exploit as much information as possible during the tree search, the most important of which is Rapid Action Value Estimation (RAVE) and its variants. These techniques estimate an additional value (All Moves As First, AMAF) for each action in a node, based on statistics from all simulations in which the action was selected deeper in the search tree. In this study, a methodology is presented for determining the most suitable node whose AMAF scores should be used during the selection phase. Two approaches are proposed for discovering nodes with similar states based on the actions selected along their paths: the first employs N-grams to detect similar paths, while the second uses a vectorized representation of the actions taken. The suggested algorithms are tested in the context of general game playing, achieving quite satisfactory results in terms of both win rate and overall score.
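For context, RAVE-style agents typically mix an edge's own Monte Carlo value with its AMAF value, letting the AMAF term dominate while visit counts are low. The sketch below shows one common blending schedule (beta = sqrt(k / (3n + k))); the function name, the parameter k, and this particular schedule are illustrative assumptions, not the paper's exact variant.

```python
import math

def rave_value(q, n, q_amaf, k=1000):
    """Blend an edge's Monte Carlo value with its AMAF value.

    q, n    -- mean reward and visit count of the edge itself
    q_amaf  -- AMAF mean reward (action chosen anywhere deeper in
               simulations passing through this node)
    k       -- equivalence parameter (assumed): at n == k both terms
               weigh equally
    """
    beta = math.sqrt(k / (3 * n + k))  # -> 1 for unvisited edges, -> 0 as n grows
    return (1.0 - beta) * q + beta * q_amaf
```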
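The first proposed approach compares nodes by the action sequences leading to them from the root. A minimal sketch of N-gram-based path similarity follows; the Jaccard overlap and the helper names are assumptions for illustration, since the abstract does not spell out the exact measure.

```python
def path_ngrams(path, n=3):
    """All contiguous n-grams of a node's action path from the root."""
    return {tuple(path[i:i + n]) for i in range(len(path) - n + 1)}

def ngram_similarity(path_a, path_b, n=3):
    """Jaccard overlap of the two paths' n-gram sets (assumed measure)."""
    a, b = path_ngrams(path_a, n), path_ngrams(path_b, n)
    return len(a & b) / len(a | b) if (a or b) else 1.0

# e.g. ngram_similarity(["left", "up", "fire", "up"],
#                       ["left", "up", "fire"], n=2) -> 2/3
```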
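The second approach represents the actions taken along a path as a vector. One plausible reading, assumed here since the abstract leaves the exact encoding open, is a bag-of-actions count vector over a fixed action space, compared with cosine similarity.

```python
from collections import Counter
import math

def action_vector(path, action_space):
    """Bag-of-actions count vector for a path (move order is discarded)."""
    counts = Counter(path)
    return [counts[a] for a in action_space]

def cosine_similarity(u, v):
    """Cosine similarity of two count vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u) * sum(y * y for y in v))
    return dot / norm if norm else 0.0
```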