Evolving a generalized strategy for an action-platformer video game framework

Karine da Silva Miras de Araújo, F. O. França
DOI: 10.1109/CEC.2016.7743938
Published: 2016-07-01, IEEE Congress on Evolutionary Computation (CEC)
Citations: 2

Abstract

Computational Intelligence in Games comprises many challenges, such as procedural level generation, evolving adversary difficulty, and the learning of autonomous playing agents. This last challenge has the objective of creating an autonomous playing agent capable of winning against an opponent in a specific game. Whereas a human being can learn a general winning strategy (i.e., avoid obstacles and defeat enemies), learning algorithms tend to overspecialize for a given training scenario (i.e., perform an exact sequence of actions to win) and are unable to cope with variations of the original scenario. To further study this problem, we applied three variations of Neuroevolution algorithms to the EvoMan game-playing learning framework, with the main objective of developing an autonomous agent capable of playing in scenarios different from those observed during the training stages. This framework is based on the boss fights of the well-known game Mega Man. The experiments show that the evolved agents are not capable of winning every challenge imposed on them, but they are still capable of learning a generalized behavior.
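The core idea the abstract describes — evolving agent parameters while scoring fitness across several scenarios so that selection favors generalized rather than overspecialized behavior — can be sketched with a minimal, hypothetical toy. The scenarios, the linear "policy," and all function names below are illustrative stand-ins, not the paper's actual EvoMan setup or its specific Neuroevolution variants:

```python
import random

random.seed(0)

# Hypothetical stand-in for EvoMan-style scenarios: each "scenario" is a
# target parameter vector, and an agent's fitness in that scenario is how
# closely its weights match the target (negative squared error).
SCENARIOS = [[0.2, -0.5, 0.9], [0.1, -0.4, 0.8], [0.3, -0.6, 1.0]]

def fitness(weights, scenario):
    # Higher is better; 0.0 is a perfect match for this one scenario.
    return -sum((w - t) ** 2 for w, t in zip(weights, scenario))

def mean_fitness(weights, scenarios):
    # Averaging fitness over several scenarios is what pushes evolution
    # toward a generalized strategy instead of overfitting one scenario.
    return sum(fitness(weights, s) for s in scenarios) / len(scenarios)

def mutate(weights, sigma=0.1):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0, sigma) for w in weights]

def evolve(scenarios, pop_size=30, generations=50):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: mean_fitness(w, scenarios), reverse=True)
        elite = pop[: pop_size // 5]  # truncation selection
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda w: mean_fitness(w, scenarios))

best = evolve(SCENARIOS)
print(best, mean_fitness(best, SCENARIOS))
```

Because the three targets differ, no single weight vector can be perfect in every scenario; the evolved solution settles near a compromise among them, which mirrors the paper's finding that the agents do not win every challenge yet still learn a generalized behavior.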