Game State Evaluation Heuristics in General Video Game Playing
Bruno Santos, H. Bernardino
2018 17th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), October 2018
DOI: 10.1109/SBGAMES.2018.00026
Citations: 1
Abstract
In General Game Playing (GGP), artificial intelligence methods play a diverse set of games. The General Video Game AI Competition (GVGAI) is one of the best-known GGP competitions, in which controllers measure their performance on games inspired by the Atari 2600 console. Here, the GVGAI framework is used. In games where the controller can perform simulations to develop its game plan, recognizing the chance of victory or defeat in the possible resulting states is essential for decision making. In GVGAI, the creation of appropriate evaluation criteria is a challenge, as the algorithm has no prior information regarding the game, such as win conditions and score rewards. We propose the use of (i) avatar-related information provided by the game, (ii) encouragement of spatial exploration, and (iii) knowledge obtained during gameplay to enhance the evaluation of game states. In addition, a penalization approach is adopted. A study is presented in which these techniques are combined with two GVGAI algorithms, namely the Rolling Horizon Evolutionary Algorithm (RHEA) and Monte Carlo Tree Search (MCTS). Computational experiments are performed on 20 deterministic and stochastic games, and the results obtained by the proposed methods are compared with those of their baseline techniques and of other methods from the literature. We observed that the proposed techniques (i) obtained more wins and higher F1-scores than their original versions and (ii) produced solutions competitive with those of methods from the literature.
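The kind of heuristic the abstract describes can be sketched as a weighted sum over a simulated state: terminal win/loss dominates, and non-terminal states combine the score with avatar-related information, an exploration bonus, and a penalization term. This is an illustrative sketch only, not the authors' exact formulation; all field names and weights are assumptions.

```python
# Hypothetical sketch of a GVGAI-style state evaluation heuristic.
# Field names, weights, and the GameState container are assumptions
# for illustration; the paper's actual formulation may differ.

from dataclasses import dataclass

@dataclass
class GameState:
    score: float          # current game score
    game_over: bool       # whether the simulated state is terminal
    winner: bool          # True if the avatar won (valid when game_over)
    avatar_health: float  # avatar-related information, e.g. health points
    avatar_pos: tuple     # (x, y) grid position of the avatar

HUGE = 10_000_000.0  # dominates every non-terminal term

def evaluate(state: GameState, visited: set,
             w_score=1.0, w_health=0.5,
             w_explore=2.0, w_penalty=1.0) -> float:
    """Heuristic value of a simulated state: higher is better."""
    if state.game_over:  # win/loss outweighs all other criteria
        return HUGE if state.winner else -HUGE
    value = w_score * state.score
    value += w_health * state.avatar_health   # avatar-related information
    if state.avatar_pos not in visited:       # encourage spatial exploration
        value += w_explore
    else:                                     # penalize revisiting known cells
        value -= w_penalty
    return value
```

In RHEA or MCTS, such a function would score the states reached at the end of each simulated rollout, with the `visited` set accumulating knowledge during gameplay.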