{"title":"遗传规划的策略发展","authors":"Koun-Tem Sun, Yi-Chun Lin, Cheng-Yen Wu, Yueh-Min Huang","doi":"10.1109/ICNC.2007.683","DOIUrl":null,"url":null,"abstract":"In this paper, we will apply genetic programming (GP) technique to develop two strategies: the ghost (attacker) and players (survivors) in the Traffic Light Game (a popular game among children). These two strategies are competing for each other. By applying GP, each one strategy is used as an \"imaginary enemy\" to evolve (train) another strategy. Based on this co-evolution process, the final developed strategies: the ghost can effectively capture the players, and the players can also escape from the ghost, rescue partners and detour the obstacles. Part of developed strategies had achieved success beyond our wildest dreams. The results encourage us to develop more complex strategies or cooperative models such as human learning models, the cooperative models of robot, and self learning of virtual agents.","PeriodicalId":250881,"journal":{"name":"Third International Conference on Natural Computation (ICNC 2007)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Strategy Development by Genetic Programming\",\"authors\":\"Koun-Tem Sun, Yi-Chun Lin, Cheng-Yen Wu, Yueh-Min Huang\",\"doi\":\"10.1109/ICNC.2007.683\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we will apply genetic programming (GP) technique to develop two strategies: the ghost (attacker) and players (survivors) in the Traffic Light Game (a popular game among children). These two strategies are competing for each other. By applying GP, each one strategy is used as an \\\"imaginary enemy\\\" to evolve (train) another strategy. Based on this co-evolution process, the final developed strategies: the ghost can effectively capture the players, and the players can also escape from the ghost, rescue partners and detour the obstacles. Part of developed strategies had achieved success beyond our wildest dreams. The results encourage us to develop more complex strategies or cooperative models such as human learning models, the cooperative models of robot, and self learning of virtual agents.\",\"PeriodicalId\":250881,\"journal\":{\"name\":\"Third International Conference on Natural Computation (ICNC 2007)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-08-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Third International Conference on Natural Computation (ICNC 2007)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICNC.2007.683\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Third International Conference on Natural Computation (ICNC 2007)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNC.2007.683","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In this paper, we apply the genetic programming (GP) technique to develop two strategies for the Traffic Light Game (a game popular among children): the ghost (attacker) and the players (survivors). These two strategies compete with each other. By applying GP, each strategy is used as an "imaginary enemy" to evolve (train) the other. Through this co-evolution process, the final evolved strategies perform well: the ghost can effectively capture the players, and the players can escape from the ghost, rescue their partners, and detour around obstacles. Some of the developed strategies achieved success beyond our expectations. The results encourage us to develop more complex strategies and cooperative models, such as human learning models, cooperative robot models, and self-learning virtual agents.
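The paper itself gives no implementation details, but the competitive co-evolution loop the abstract describes can be sketched in a few lines of Python. The sketch below is a simplified assumption, not the authors' setup: strategies are small expression trees over the opponent's relative position, the "game" is a toy one-dimensional pursuit, and each population is scored against the other population's current champion, which plays the role of the "imaginary enemy".

```python
# Minimal co-evolution sketch (assumed toy game and representation; not the authors' code).
import random

TERMINALS = ["dx", "dy", "1.0"]   # relative opponent position and a constant
OPS = ["+", "-", "*"]

def random_tree(depth=3):
    """Grow a random expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, dx, dy):
    """Interpret a tree as a move score."""
    if isinstance(tree, str):
        return {"dx": dx, "dy": dy, "1.0": 1.0}[tree]
    op, left, right = tree
    a, b = evaluate(left, dx, dy), evaluate(right, dx, dy)
    return a + b if op == "+" else a - b if op == "-" else a * b

def mutate(tree, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def play(ghost, player, steps=20):
    """Toy 1-D pursuit: return the ghost's reward (higher = earlier capture)."""
    gx, px = 0.0, 5.0
    for t in range(steps):
        gx += 1.0 if evaluate(ghost, px - gx, 0.0) > 0 else -1.0
        px += 1.0 if evaluate(player, gx - px, 0.0) > 0 else -1.0
        if abs(gx - px) < 1.0:
            return steps - t          # capture: reward the ghost
    return 0                          # player survived the whole game

def coevolve(pop_size=30, generations=40, elite=10):
    ghosts = [random_tree() for _ in range(pop_size)]
    players = [random_tree() for _ in range(pop_size)]
    best_ghost, best_player = ghosts[0], players[0]
    for _ in range(generations):
        # Each population is evaluated against the other's champion,
        # the "imaginary enemy" of the abstract.
        ghosts.sort(key=lambda g: -play(g, best_player))
        players.sort(key=lambda p: play(best_ghost, p))
        best_ghost, best_player = ghosts[0], players[0]
        ghosts = ghosts[:elite] + [mutate(random.choice(ghosts[:elite]))
                                   for _ in range(pop_size - elite)]
        players = players[:elite] + [mutate(random.choice(players[:elite]))
                                     for _ in range(pop_size - elite)]
    return best_ghost, best_player

if __name__ == "__main__":
    ghost, player = coevolve()
    print("ghost reward vs. best player:", play(ghost, player))
```

The key design point mirrored from the abstract is the alternating evaluation: ghost fitness is measured against the best current player and vice versa, so each side continually adapts to the other's most recent strategy rather than to a fixed opponent.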