Developing an Adaptive AI Agent using Supervised and Reinforcement Learning with Monte Carlo Tree Search in FightingICE

J. P. Tomas, Nathanael Jhonn R. Aguas, Angela N. De Villa, Jasmin Rose G. Lim

International Conference on Computational Intelligence and Intelligent Systems. DOI: 10.1145/3507623.3507629
Reinforcement Learning (RL) and Monte Carlo Tree Search (MCTS) are efficient algorithms for video game artificial intelligence (AI) agents, while Supervised Learning (SL) allows a video game AI agent to act on visual input. Combining SL and RL with MCTS has been tested in Computer Go, but it has yet to be thoroughly explored for fighting games. FightingICE, a 2D fighting game, serves as an ideal testing environment because of its complex action and observation spaces. In this paper, we use a Convolutional Neural Network (CNN) and Deep Q-Learning combined with MCTS (DQCN with MCTS) to create three models for FightingICE, and we compare their performance against that of an MCTS agent when playing the same set of human testers. Our best-performing model achieved a 58.57% win rate over 70 testing games after 7 training games. Although the model did not beat the MCTS agent's performance, it demonstrates the potential of combining SL, RL, and MCTS to develop an AI agent for fighting games.
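The abstract does not specify the network architecture, so the following is only a minimal sketch of the kind of component it describes: a CNN-based Q-network that scores game actions from stacked screen frames, whose Q-value estimates could then bias MCTS node evaluation instead of random rollouts. The framework (PyTorch), layer sizes, 4-frame grayscale input, frame resolution, and action count are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a CNN Q-network of the general kind the abstract
# describes (SL-style visual input feeding Deep Q-Learning). All hyperparameters
# below are hypothetical; the paper does not publish its architecture here.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions: int = 40):  # assumed action-space size
        super().__init__()
        # Convolutional trunk: encodes a stack of 4 grayscale game frames
        # into a feature map.
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Fully connected head: maps the flattened features to one Q-value
        # per action; LazyLinear infers the input size on first forward pass.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 4, H, W) screen captures scaled to [0, 1]
        return self.head(self.conv(frames))

# At decision time, such a network's Q-values could serve as leaf evaluations
# or action priors inside MCTS, rather than driving play directly.
net = QNetwork()
q_values = net(torch.rand(1, 4, 96, 64))  # hypothetical 96x64 frame stack
action = int(q_values.argmax(dim=1))
```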