{"title":"A strongly typed GP-based video game player","authors":"Baozhu Jia, M. Ebner","doi":"10.1109/CIG.2015.7317920","DOIUrl":null,"url":null,"abstract":"This paper attempts to evolve a general video game player, i.e. an agent which is able to learn to play many different video games with little domain knowledge. Our project uses strongly typed genetic programming as a learning algorithm. Three simple hand-crafted features are chosen to represent the game state. Each feature is a vector which consists of the position and orientation of each game object that is visible on the screen. These feature vectors are handed to the learning algorithm which will output the action the game player will take next. Game knowledge and feature vectors are acquired by processing screen grabs from the game. Three different video games are used to test the algorithm. Experiments show that our algorithm is able to find solutions to play all these three games efficiently.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIG.2015.7317920","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
This paper attempts to evolve a general video game player, i.e., an agent that can learn to play many different video games with little domain knowledge. Our project uses strongly typed genetic programming as the learning algorithm. Three simple hand-crafted features are chosen to represent the game state. Each feature is a vector containing the position and orientation of every game object visible on the screen. These feature vectors are handed to the learning algorithm, which outputs the next action the game player will take. Game knowledge and feature vectors are acquired by processing screen grabs of the game. Three different video games are used to test the algorithm. Experiments show that our algorithm is able to find solutions that play all three games efficiently.
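To illustrate the core idea, here is a minimal sketch of strongly typed GP in the spirit of the abstract: typed primitives constrain which subtrees may appear as arguments, and the evolved tree maps a game state (a list of position/orientation feature vectors, as described above) to an action. The type tags, primitive names, and the three-action game are invented for illustration; the paper's actual feature and function sets are not specified here.

```python
import random

# Type tags used by the typed tree generator. Strong typing means a node of
# return type T may only appear where the parent expects an argument of type T.
VEC, NUM, ACT = "vec", "num", "act"

# Each primitive: (return type, argument types, implementation).
# Implementations receive the game state -- a list of (x, y, angle) feature
# tuples, one per visible object -- plus the evaluated child values.
PRIMITIVES = {
    # terminals (arity 0)
    "nearest":  (VEC, (), lambda st: min(st, key=lambda o: o[0]**2 + o[1]**2)),
    "zero":     (NUM, (), lambda st: 0.0),
    "go_left":  (ACT, (), lambda st: "LEFT"),
    "go_right": (ACT, (), lambda st: "RIGHT"),
    "fire":     (ACT, (), lambda st: "FIRE"),
    # functions
    "x_of":     (NUM, (VEC,),          lambda st, v: v[0]),
    "angle_of": (NUM, (VEC,),          lambda st, v: v[2]),
    "if_pos":   (ACT, (NUM, ACT, ACT), lambda st, n, a, b: a if n > 0 else b),
}

def grow(ret_type, depth, rng):
    """Randomly grow a tree whose root returns ret_type (the 'grow' method).

    At depth 0 only terminals are eligible, guaranteeing termination.
    """
    choices = [name for name, (t, args, _) in PRIMITIVES.items()
               if t == ret_type and (depth > 0 or not args)]
    name = rng.choice(choices)
    _, arg_types, _ = PRIMITIVES[name]
    return (name, [grow(a, depth - 1, rng) for a in arg_types])

def evaluate(tree, state):
    """Interpret a typed tree against the current game state."""
    name, children = tree
    _, _, fn = PRIMITIVES[name]
    return fn(state, *(evaluate(c, state) for c in children))

# One visible object to the upper-left of the player.
state = [(-3.0, 4.0, 1.57)]
tree = grow(ACT, depth=3, rng=random.Random(0))
action = evaluate(tree, state)
print(action)  # one of "LEFT", "RIGHT", "FIRE"
```

In a full GP run, many such trees would be evolved with crossover and mutation that respect the type constraints, with fitness measured by game score; libraries such as DEAP provide this machinery via `gp.PrimitiveSetTyped`.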