{"title":"评估由人类引导的机器学习构建的团队行为","authors":"Igor Karpov, Leif M. Johnson, R. Miikkulainen","doi":"10.1109/CIG.2015.7317946","DOIUrl":null,"url":null,"abstract":"Machine learning games such as NERO incorporate adaptive methods such as neuroevolution as an integral part of the gameplay by allowing the player to train teams of autonomous agents for effective behavior in challenging open-ended tasks. However, rigorously evaluating such human-guided machine learning methods and the resulting teams of agent policies can be challenging and is thus rarely done. This paper presents the results and analysis of a large scale online tournament between participants who evolved team agent behaviors and submitted them to be compared with others. An analysis of the teams submitted for the tournament indicates a complex, non-transitive fitness landscape, multiple successful strategies and training approaches, and performance above hand-constructed and random baselines. The tournament and analysis presented provide a practical way to study and improve human-guided machine learning methods and the resulting NPC team behaviors, potentially leading to better games and better game design tools in the future.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Evaluating team behaviors constructed with human-guided machine learning\",\"authors\":\"Igor Karpov, Leif M. Johnson, R. Miikkulainen\",\"doi\":\"10.1109/CIG.2015.7317946\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning games such as NERO incorporate adaptive methods such as neuroevolution as an integral part of the gameplay by allowing the player to train teams of autonomous agents for effective behavior in challenging open-ended tasks. 
However, rigorously evaluating such human-guided machine learning methods and the resulting teams of agent policies can be challenging and is thus rarely done. This paper presents the results and analysis of a large scale online tournament between participants who evolved team agent behaviors and submitted them to be compared with others. An analysis of the teams submitted for the tournament indicates a complex, non-transitive fitness landscape, multiple successful strategies and training approaches, and performance above hand-constructed and random baselines. The tournament and analysis presented provide a practical way to study and improve human-guided machine learning methods and the resulting NPC team behaviors, potentially leading to better games and better game design tools in the future.\",\"PeriodicalId\":244862,\"journal\":{\"name\":\"2015 IEEE Conference on Computational Intelligence and Games (CIG)\",\"volume\":\"114 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-11-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE Conference on Computational Intelligence and Games (CIG)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIG.2015.7317946\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Conference on Computational Intelligence and Games 
(CIG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIG.2015.7317946","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evaluating team behaviors constructed with human-guided machine learning
Machine learning games such as NERO incorporate adaptive methods such as neuroevolution as an integral part of the gameplay, allowing the player to train teams of autonomous agents for effective behavior in challenging open-ended tasks. However, rigorously evaluating such human-guided machine learning methods and the resulting teams of agent policies is difficult and thus rarely done. This paper presents the results and analysis of a large-scale online tournament in which participants evolved team agent behaviors and submitted them to be compared with those of others. An analysis of the submitted teams indicates a complex, non-transitive fitness landscape; multiple successful strategies and training approaches; and performance above hand-constructed and random baselines. The tournament and analysis provide a practical way to study and improve human-guided machine learning methods and the resulting NPC team behaviors, potentially leading to better games and better game design tools in the future.
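The non-transitive fitness landscape mentioned in the abstract can be illustrated with a minimal sketch: given a pairwise win relation from a round-robin comparison, a cycle (A beats B, B beats C, C beats A) means no single ranking orders all teams consistently. The win data below is hypothetical, chosen only to demonstrate the concept, and is not from the paper's tournament.

```python
from itertools import permutations

# Hypothetical pairwise results: wins[a][b] is True if team a beat team b.
# Illustrative data only -- not the actual tournament results from the paper.
wins = {
    "A": {"B": True,  "C": False},
    "B": {"A": False, "C": True},
    "C": {"A": True,  "B": False},
}

def non_transitive_triples(wins):
    """Return orderings (a, b, c) where a beats b, b beats c, yet c beats a."""
    teams = list(wins)
    return [
        (a, b, c)
        for a, b, c in permutations(teams, 3)
        if wins[a][b] and wins[b][c] and wins[c][a]
    ]

cycles = non_transitive_triples(wins)
# Any such cycle shows the "beats" relation admits no consistent total ranking.
```

With the sample data, `("A", "B", "C")` is one such cycle, so any linear ranking of these three teams misorders at least one matchup.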