Coevolving Strategies for General Game Playing
J. Reisinger, E. Bahçeci, Igor Karpov, R. Miikkulainen
2007 IEEE Symposium on Computational Intelligence and Games, April 2007
DOI: 10.1109/CIG.2007.368115
Citations: 39
Abstract
The General Game Playing Competition (Genesereth et al., 2005) poses a unique challenge for artificial intelligence. To be successful, a player must learn to play well in a limited number of example games encoded in first-order logic and then generalize its game play to previously unseen games with entirely different rules. Because good opponents are usually not available, learning algorithms must come up with plausible opponent strategies in order to benchmark performance. One approach to learning all player strategies simultaneously is coevolution. This paper presents a coevolutionary approach that uses NeuroEvolution of Augmenting Topologies (NEAT) to evolve populations of game-state evaluators. The approach is tested on a sample of games from the General Game Playing Competition and shown to be effective: it allows the algorithm designer to minimize the amount of domain knowledge built into the system, which leads to more general game play, and it allows opponent strategies to be modeled efficiently. Furthermore, the general game playing domain proves to be a powerful tool for developing and testing coevolutionary methods.
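The core idea the abstract describes (evaluators whose fitness comes from games against other evolving evaluators, so no hand-coded benchmark opponent is needed) can be sketched compactly. The Python below is a hypothetical illustration, not the paper's implementation: it substitutes fixed-topology linear evaluators on tic-tac-toe for the paper's NEAT networks and GDL-encoded games, and all names (evaluate, coevolve, etc.) are invented for the example.

import random

# Winning lines on a 3x3 board; cells hold +1 (X), -1 (O), or 0 (empty).
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return +1 or -1 if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(weights, board, player):
    """Score a board from `player`'s perspective with a linear evaluator."""
    return sum(w * cell * player for w, cell in zip(weights, board))

def move(weights, board, player):
    """Greedy one-ply move selection using the evolved evaluator."""
    best, best_score = None, float("-inf")
    for i in range(9):
        if board[i] == 0:
            board[i] = player
            score = evaluate(weights, board, player)
            board[i] = 0
            if score > best_score:
                best, best_score = i, score
    return best

def play(w_x, w_o):
    """Play one game; return +1 (X wins), -1 (O wins), or 0 (draw)."""
    board, player = [0] * 9, 1
    for _ in range(9):
        weights = w_x if player == 1 else w_o
        board[move(weights, board, player)] = player
        result = winner(board)
        if result is not None:
            return result
        player = -player
    return 0

def coevolve(pop_size=20, generations=50):
    """Single-population competitive coevolution with sampled-opponent fitness."""
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness is the score against opponents drawn from the same evolving
        # population, playing both sides, so the benchmark itself evolves.
        fitness = [0.0] * pop_size
        for i, w in enumerate(pop):
            for j in random.sample(range(pop_size), 5):
                fitness[i] += play(w, pop[j]) - play(pop[j], w)
        # Truncation selection plus Gaussian mutation of the surviving half.
        ranked = sorted(range(pop_size), key=lambda i: fitness[i], reverse=True)
        elite = [pop[i] for i in ranked[: pop_size // 2]]
        pop = elite + [[g + random.gauss(0, 0.1) for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return pop

if __name__ == "__main__":
    evolved = coevolve()
    print("best evolved evaluator weights:", [round(g, 2) for g in evolved[0]])

The sketch keeps only the coevolutionary structure the abstract emphasizes: because each candidate is scored against other members of the evolving population rather than a fixed opponent, plausible opponent strategies emerge as a by-product of the search itself.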