{"title":"神经围棋棋手的多目标进化","authors":"Kar Bin Tan, J. Teo, P. Anthony","doi":"10.1109/DIGITEL.2010.19","DOIUrl":null,"url":null,"abstract":"Solving multi-objective optimization problems (MOPs) using evolutionary algorithms (EAs) has been gaining a lot of interest recently. Go is a hard and complex board game. Using EAs, a computer may learn to play the game of Go by playing the games repeatedly and gaining the experience from these repeated plays. In this project, artificial neural networks (ANNs) are evolved with the Pareto Archived Evolution Strategies (PAES) for the computer player to automatically learn and optimally play the small board Go game. ANNs will be automatically evolved with the least amount of complexity (number of hidden units) to optimally play the Go game. The complexity of ANN is of particular importance since it will influence the generalization capability of the evolved network. Hence, there are two conflicting objectives in this study; first is maximizing the Go game fitness score and the second is reducing the complexity in the ANN. Several comparative empirical experiments were conducted that showed that the multi-objective optimization with two distinct and conflicting fitness functions outperformed the single-objective optimization which only optimized the first objective with no selection pressure selection on the second objective.","PeriodicalId":430843,"journal":{"name":"2010 Third IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Multi-objective Evolution of Neural Go Players\",\"authors\":\"Kar Bin Tan, J. Teo, P. Anthony\",\"doi\":\"10.1109/DIGITEL.2010.19\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Solving multi-objective optimization problems (MOPs) using evolutionary algorithms (EAs) has been gaining a lot of interest recently. Go is a hard and complex board game. Using EAs, a computer may learn to play the game of Go by playing the games repeatedly and gaining the experience from these repeated plays. In this project, artificial neural networks (ANNs) are evolved with the Pareto Archived Evolution Strategies (PAES) for the computer player to automatically learn and optimally play the small board Go game. ANNs will be automatically evolved with the least amount of complexity (number of hidden units) to optimally play the Go game. The complexity of ANN is of particular importance since it will influence the generalization capability of the evolved network. Hence, there are two conflicting objectives in this study; first is maximizing the Go game fitness score and the second is reducing the complexity in the ANN. 
Several comparative empirical experiments were conducted that showed that the multi-objective optimization with two distinct and conflicting fitness functions outperformed the single-objective optimization which only optimized the first objective with no selection pressure selection on the second objective.\",\"PeriodicalId\":430843,\"journal\":{\"name\":\"2010 Third IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-04-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 Third IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DIGITEL.2010.19\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 Third IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DIGITEL.2010.19","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Solving multi-objective optimization problems (MOPs) with evolutionary algorithms (EAs) has attracted considerable interest in recent years. Go is a hard and complex board game, and with EAs a computer can learn to play it by playing repeatedly and accumulating experience from those games. In this project, artificial neural networks (ANNs) are evolved with the Pareto Archived Evolution Strategy (PAES) so that a computer player automatically learns to play small-board Go well. The ANNs are evolved to play Go optimally with the least complexity, measured as the number of hidden units. Network complexity is particularly important because it influences the generalization capability of the evolved network. Hence, this study has two conflicting objectives: the first is maximizing the Go game fitness score, and the second is minimizing the complexity of the ANN. Several comparative empirical experiments showed that multi-objective optimization with these two distinct and conflicting fitness functions outperformed single-objective optimization, which optimized only the first objective and applied no selection pressure on the second.
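To make the two-objective formulation concrete, below is a minimal, hypothetical Python sketch of a (1+1)-PAES-style loop with the two objectives named in the abstract: maximizing a Go fitness score and minimizing the number of hidden units. The names (`evaluate_go_fitness`, `paes_1plus1`, `WEIGHTS_PER_HIDDEN_UNIT`), the toy weight encoding, the surrogate fitness function, and the simple archive truncation are illustrative assumptions, not the paper's implementation; real PAES maintains its archive with an adaptive grid, and a real evaluator would score each network by playing small-board Go games.

```python
import random

# Hypothetical sketch of the two-objective setup described in the abstract:
# objective 1 (maximize) is a Go fitness score, objective 2 (minimize) is the
# number of hidden units. Playing actual small-board Go is out of scope here,
# so evaluate_go_fitness is a stand-in surrogate, not the paper's evaluator.

WEIGHTS_PER_HIDDEN_UNIT = 3  # assumed toy encoding: 3 weights per hidden unit


def evaluate_go_fitness(weights):
    # Placeholder for the score a network would earn by playing Go games.
    return -sum(w * w for w in weights)


def objectives(individual):
    # Both objectives are returned so that "larger is better":
    # (go_score, -number_of_hidden_units).
    hidden, weights = individual
    return (evaluate_go_fitness(weights), -hidden)


def dominates(a, b):
    # True if objective vector a Pareto-dominates objective vector b.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def mutate(individual):
    # Perturb the weights and occasionally add or remove a hidden unit.
    hidden, weights = individual
    hidden = max(1, hidden + random.choice((-1, 0, 1)))
    weights = [w + random.gauss(0, 0.1) for w in weights][: hidden * WEIGHTS_PER_HIDDEN_UNIT]
    while len(weights) < hidden * WEIGHTS_PER_HIDDEN_UNIT:
        weights.append(random.gauss(0, 0.1))
    return hidden, weights


def paes_1plus1(generations=500, archive_size=20):
    # (1+1)-PAES-style loop: one parent, one mutant per generation, plus a
    # bounded archive of non-dominated solutions. Real PAES uses adaptive-grid
    # crowding to maintain the archive; this sketch simply truncates it.
    parent = (5, [random.gauss(0, 1) for _ in range(5 * WEIGHTS_PER_HIDDEN_UNIT)])
    archive = [parent]
    for _ in range(generations):
        child = mutate(parent)
        p_obj, c_obj = objectives(parent), objectives(child)
        if dominates(p_obj, c_obj):
            continue  # child is dominated by its parent: reject it
        if not any(dominates(objectives(a), c_obj) for a in archive):
            archive = [a for a in archive if not dominates(c_obj, objectives(a))]
            archive.append(child)
            archive = archive[-archive_size:]
            parent = child
    return archive


if __name__ == "__main__":
    front = paes_1plus1()
    for hidden, _ in front:
        print("hidden units:", hidden)
```

The archive collected by such a loop approximates the trade-off front between playing strength and network size, which is the kind of comparison the abstract draws against a single-objective run that ignores the second objective.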