{"title":"A Genetic Approach to the Formulation of Tetris Engine","authors":"Hongtao Zhang","doi":"10.1145/3577117.3577134","DOIUrl":null,"url":null,"abstract":"The game Tetris is a great and famous topic for research in artificial intelligence and machine learning. Many investigations have already existed. However, we believe more things can be learned from this topic, and there is still space to improve. This paper will tackle the Tetris game using three different agents, the handcrafted, local search and reinforcement learning agents. We will implement, compare and analyze all three agents to understand their advantages and disadvantages. In brief, the main result is that the local search agent turns out to be the most successful agent, which performs ten times better than the handcrafted agent and five times better than the reinforcement learning agent. The main result implies two take-away messages. Firstly, sometimes the simple model is the optimal model. Secondly, we should be cautious when using a Convolutional Neural Network (CNN) to encode game state because of its spatial invariance property.","PeriodicalId":309874,"journal":{"name":"Proceedings of the 6th International Conference on Advances in Image Processing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Advances in Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3577117.3577134","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The game of Tetris is a well-known and popular subject of research in artificial intelligence and machine learning. Many investigations already exist; however, we believe more can still be learned from this topic and that there is room for improvement. This paper tackles Tetris using three different agents: a handcrafted agent, a local search agent, and a reinforcement learning agent. We implement, compare, and analyze all three agents to understand their advantages and disadvantages. In brief, the main result is that the local search agent is the most successful, performing ten times better than the handcrafted agent and five times better than the reinforcement learning agent. This result implies two take-away messages. First, the simpler model is sometimes the optimal model. Second, we should be cautious when using a Convolutional Neural Network (CNN) to encode the game state, because of its spatial-invariance property.
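To make the kind of agent the abstract describes more concrete, below is a minimal sketch of a feature-based board evaluation function of the sort that handcrafted and local search Tetris agents commonly rely on. The feature set (aggregate height, holes, bumpiness, lines cleared) and the weights are illustrative assumptions, not values taken from the paper; in a local search agent such weights would typically be tuned automatically.

```python
# Illustrative sketch of a heuristic board evaluation for a Tetris agent.
# The features and weights are hypothetical, not the paper's actual values.

from typing import List


def column_heights(board: List[List[int]]) -> List[int]:
    """Height of each column, measured from the bottom of the grid.
    `board` is a 2D list of 0/1 cells with row 0 at the top."""
    rows, cols = len(board), len(board[0])
    heights = []
    for c in range(cols):
        h = 0
        for r in range(rows):
            if board[r][c]:
                h = rows - r
                break
        heights.append(h)
    return heights


def count_holes(board: List[List[int]]) -> int:
    """Empty cells that have at least one filled cell above them."""
    holes = 0
    for c in range(len(board[0])):
        seen_block = False
        for r in range(len(board)):
            if board[r][c]:
                seen_block = True
            elif seen_block:
                holes += 1
    return holes


def evaluate(board: List[List[int]], lines_cleared: int) -> float:
    """Score a candidate placement; higher is better.
    Weights are hypothetical and would normally be tuned by search."""
    heights = column_heights(board)
    aggregate_height = sum(heights)
    bumpiness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
    holes = count_holes(board)
    return (0.76 * lines_cleared
            - 0.51 * aggregate_height
            - 0.36 * holes
            - 0.18 * bumpiness)
```

A handcrafted agent would fix such weights by hand, while a local search agent would evaluate every legal placement of the current piece with a function like this and adjust the weights to maximize lines cleared.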