Latest Publications: 2016 IEEE Conference on Computational Intelligence and Games (CIG)

Intrinsically motivated reinforcement learning: A promising framework for procedural content generation
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860450
Authors: Noor Shaker
Abstract: So far, Evolutionary Algorithms (EA) have been the dominant paradigm for Procedural Content Generation (PCG). While the field has achieved remarkable success, we claim that there is a wide window for improvement, and the field of machine learning has an abundance of methods that promise solutions to aspects of PCG that are still under-researched. In this paper, we advocate the use of intrinsically motivated reinforcement learning for content generation: a class of methods that strive for knowledge for its own sake rather than as a step towards finding a solution. We argue that this approach promises solutions to some of the well-known problems in PCG: (1) searching for novelty and diversity can be easily incorporated as an intrinsic reward; (2) improving models of player experience and generating adapted content can be done simultaneously by combining extrinsic and intrinsic rewards; and (3) mixed-initiative design tools can incorporate more knowledge about the designer and her preferences and ultimately provide better assistance. We demonstrate our arguments and discuss the challenges that face the proposed approach.
Pages: 1-8 | Citations: 9
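A minimal sketch of the reward blending proposed in points (1) and (2) of the abstract above: an extrinsic term from a player-experience model combined with an intrinsic novelty bonus, in the style of novelty search. Everything here (the scalar content representation, the function names, the weight beta) is an illustrative assumption, not the paper's method.

    import random

    def novelty_bonus(content, archive, k=3):
        # Intrinsic reward: mean distance to the k nearest pieces of
        # previously generated content.
        if not archive:
            return 1.0
        dists = sorted(abs(content - past) for past in archive)
        return sum(dists[:k]) / min(k, len(dists))

    def experience_score(content):
        # Extrinsic reward stub: a learned player-experience model would go
        # here; this stand-in just prefers content near an arbitrary target.
        return -abs(content - 0.7)

    archive, beta = [], 0.5  # beta weights the intrinsic term
    best, best_reward = None, float("-inf")
    for _ in range(100):
        candidate = random.random()  # stand-in for a generated level parameter
        reward = experience_score(candidate) + beta * novelty_bonus(candidate, archive)
        archive.append(candidate)
        if reward > best_reward:
            best, best_reward = candidate, reward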
Using opponent models to train inexperienced synthetic agents in social environments
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860409
Authors: C. Kiourt, Dimitris Kalles
Abstract: This paper investigates the learning progress of inexperienced agents in competitive game-playing social environments. We aim to determine the effect of a knowledgeable opponent on a novice learner. For that purpose, we used as opponents synthetic agents whose playing behaviors were developed through diverse reinforcement learning set-ups (varying the exploration-exploitation trade-off, the learning backup, and the speed of learning), alongside a self-trained agent. The paper concludes by highlighting the effect of diverse knowledgeable synthetic agents on the learning trajectory of an inexperienced agent in competitive multiagent environments.
Pages: 1-4 | Citations: 4
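A hedged sketch of the set-up the abstract describes: a tabular novice learner trained against a fixed, knowledgeable opponent, with the exploration-exploitation trade-off exposed as epsilon. The toy game, the opponent's strategy, and all parameter values are assumptions for illustration.

    import random

    ACTIONS = [0, 1, 2]  # rock, paper, scissors

    def expert_opponent():
        # Knowledgeable opponent: plays a fixed, skewed strategy the novice
        # can learn to exploit.
        return random.choices(ACTIONS, weights=[0.6, 0.3, 0.1])[0]

    def payoff(a, b):
        return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

    def train_novice(epsilon, alpha=0.1, episodes=5000):
        # epsilon controls the exploration-exploitation trade-off, one of the
        # set-up dimensions the paper varies across opponents.
        q = {a: 0.0 for a in ACTIONS}
        for _ in range(episodes):
            a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
            r = payoff(a, expert_opponent())
            q[a] += alpha * (r - q[a])  # stateless Q-learning update
        return q

    print(train_novice(epsilon=0.1))  # the learned action values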
Investigating vanilla MCTS scaling on the GVG-AI game corpus
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860443
Authors: M. Nelson
Abstract: The General Video Game AI Competition (GVG-AI) invites submissions of controllers to play games specified in the Video Game Description Language (VGDL), testing them against each other and several baselines. One baseline that has done surprisingly well in some of the competitions is sampleMCTS, a straightforward implementation of Monte Carlo tree search (MCTS). Although it has done worse in other iterations of the competition, its success raises a nagging worry that the GVG-AI competition might be too easy, especially since performance profiling suggests that optimizations to the competition framework could significantly increase the number of MCTS iterations completed within a given time limit. To better understand the potential performance of the baseline vanilla MCTS controller, I perform scaling experiments, running it against the 62 games in the public GVG-AI corpus as the time budget is varied from about 1/30 of the current competition's budget to around 30x that budget. I find that it does not in fact master the games even given 30x the current time budget, so the challenge of the GVG-AI competition is safe (at least against this baseline). However, I do find that, given enough computational budget, it manages to avoid explicitly losing most games, despite failing to win them and ultimately losing as time expires. This suggests an asymmetry in the current GVG-AI competition's challenge: not losing is significantly easier than winning.
Pages: 1-7 | Citations: 27
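The scaling protocol above boils down to sweeping the wall-clock budget of an otherwise unchanged controller. A sketch under stated assumptions: the state interface and the nominal 40 ms reference budget are stand-ins, not the GVG-AI framework's actual (Java) API.

    import math, random, time

    class StubState:
        # Stand-in for a VGDL game state; the real corpus games differ.
        def legal_actions(self):
            return ["left", "right", "up", "down", "use"]
        def rollout(self, action):
            return random.random()  # random playout value in [0, 1]

    def mcts_decide(state, budget_s):
        # Vanilla MCTS reduced to its essentials: UCB1 over root actions,
        # sampling rollouts until the wall-clock budget expires.
        stats = {a: [0, 0.0] for a in state.legal_actions()}  # visits, value sum
        deadline, total = time.monotonic() + budget_s, 0
        while time.monotonic() < deadline:
            a = max(stats, key=lambda a: float("inf") if stats[a][0] == 0
                    else stats[a][1] / stats[a][0]
                    + math.sqrt(2 * math.log(total) / stats[a][0]))
            stats[a][0] += 1
            stats[a][1] += state.rollout(a)
            total += 1
        return max(stats, key=lambda a: stats[a][0])  # most-visited action

    # Budget sweep from ~1/30x to ~30x a nominal 40 ms per move; the full
    # experiment would log win / not-lose rates per corpus game.
    for budget in (0.04 / 30, 0.04, 0.04 * 30):
        print(budget, mcts_decide(StubState(), budget))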
Modeling believable game characters
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860412
Authors: Hanneke Kersjes, P. Spronck
Abstract: The behavior of virtual characters in computer games is usually determined solely by decision trees or finite state machines, which is detrimental to the characters' believability. It has been argued that enhancing virtual characters with emotions, personalities, and moods may make their behavior more diverse and thus more believable. Most research in this direction is based on existing (socio-)psychological literature but is not tested in a suitable experimental setting where humans interact with the virtual characters. In our research, we use a simplified version of the personality model of Ochs et al. [1], which we test in a game in which human participants interact with three agents with different personalities: an extraverted agent, a neurotic agent, and a neutral agent. The model only influences the agents' emotions, which are exhibited only through their facial expressions. The participants were asked to assess the agents' personalities based on six possible traits. We found that the participants considered the neurotic agent the most neurotic, and there are also indications that the extraverted agent was considered the most extraverted. We conclude that players will indeed distinguish personality differences between agents based on their facial expressions of emotion. Using a personality model may therefore make it easy for game developers to quickly create a wide variety of virtual characters who exhibit individual behaviors, making them more believable.
Pages: 1-8 | Citations: 1
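The abstract's core mechanism, personality modulating only emotion intensity, which is in turn expressed only through the face, can be sketched in a few lines. This is a toy in the spirit of a simplified personality model, not the actual Ochs et al. formulation; the traits used, the scaling factors, and the expression parameters are all assumptions.

    from dataclasses import dataclass

    @dataclass
    class Personality:
        extraversion: float  # 0..1
        neuroticism: float   # 0..1

    def expressed_emotion(p: Personality, event_valence: float) -> dict:
        # Personality only scales emotion intensity; emotion is only shown
        # via facial-expression parameters, as with the study's agents.
        joy = max(0.0, event_valence) * (0.5 + 0.5 * p.extraversion)
        distress = max(0.0, -event_valence) * (0.5 + 0.5 * p.neuroticism)
        return {"smile": joy, "frown": distress}

    neurotic = Personality(extraversion=0.3, neuroticism=0.9)
    print(expressed_emotion(neurotic, event_valence=-0.8))  # strong frown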
Heterogeneous team deep q-learning in low-dimensional multi-agent environments
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860413
Authors: Mateusz Kurek, Wojciech Jaśkowski
Abstract: Deep Q-Learning is an effective reinforcement learning method which has recently obtained human-level performance on a set of Atari 2600 games, remarkably while trained on high-dimensional raw visual data. Is Deep Q-Learning equally valid for problems involving a low-dimensional state space? To answer this question, we evaluate the components of Deep Q-Learning (deep architecture, experience replay, target network freezing, and meta-state) on a Keepaway soccer problem, where the state is described by only 13 variables. The results indicate that although experience replay indeed improves agent performance, target network freezing and meta-state slow down the learning process. Moreover, the deep architecture does not help on this task: a rather shallow network with just two hidden layers worked best. By selecting the best settings and employing heterogeneous team learning, we were able to outperform all previous methods applied to Keepaway soccer using a fraction of the runner-up's computational expense. These results extend our understanding of the effectiveness of Deep Q-Learning for low-dimensional reinforcement learning tasks.
Pages: 1-8 | Citations: 20
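The components the paper ablates are easy to point at in code. Below is a generic Deep Q-Learning sketch over a 13-variable state with experience replay and target network freezing marked; the network sizes, hyperparameters, and random transitions are assumptions, not the paper's settings.

    import copy, random
    from collections import deque
    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 13, 3  # Keepaway describes the state in 13 variables

    def make_net():
        # "Rather shallow": two hidden layers, which the paper found worked best.
        return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                             nn.Linear(64, 64), nn.ReLU(),
                             nn.Linear(64, N_ACTIONS))

    q_net = make_net()
    target_net = copy.deepcopy(q_net)      # component: target network freezing
    replay = deque(maxlen=50_000)          # component: experience replay
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma, freeze_period = 0.99, 1000

    def train_step(step, batch_size=32):
        if len(replay) < batch_size:
            return
        s, a, r, s2, done = map(torch.tensor, zip(*random.sample(replay, batch_size)))
        q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r.float() + gamma * (1 - done.float()) * target_net(s2.float()).max(1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad(); loss.backward(); opt.step()
        if step % freeze_period == 0:      # refresh the frozen copy
            target_net.load_state_dict(q_net.state_dict())

    # Fill the buffer with random transitions just to exercise the update.
    for _ in range(100):
        replay.append(([random.random() for _ in range(STATE_DIM)],
                       random.randrange(N_ACTIONS), random.random(),
                       [random.random() for _ in range(STATE_DIM)], False))
    train_step(step=1000)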
Semi-automated level design via auto-playtesting for handheld casual game creation
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860438
Authors: E. Powley, S. Colton, Swen E. Gaudl, Rob Saunders, M. Nelson
Abstract: We provide a proof of principle that novel and engaging mobile casual games with new aesthetics, game mechanics, and player interactions can be designed and tested directly on the device for which they are intended. We describe the Gamika iOS application, which includes: generative art assets; a design interface enabling the creation of physics-based casual games containing multiple levels, with aspects ranging from Frogger-like to Asteroids-like and beyond; a configurable automated playtester which can give feedback on the playability of levels; and an automated fine-tuning engine which searches for level parameterisations that enable the game to pass a battery of tests, as evaluated by the auto-playtester. Each aspect of the implementation represents a baseline with much room for improvement, and we present experimental results and describe how they will guide future directions for Gamika.
Pages: 1-8 | Citations: 12
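The fine-tuning engine described above is, at heart, a search over level parameterisations scored by the auto-playtester. A minimal sketch assuming a hill-climbing search and a two-test battery; the real Gamika tester simulates play on-device, and its actual tests and parameters are not specified here.

    import random

    def auto_playtest(params):
        # Stub for the automated playtester: returns one pass flag per test
        # in the battery. The target ranges are invented for illustration.
        return [0.2 <= params["gravity"] <= 0.6, params["spawn_rate"] <= 0.5]

    def fine_tune(params, iterations=1000):
        # Hill-climb over the parameterisation until every test passes.
        best, best_score = dict(params), sum(auto_playtest(params))
        for _ in range(iterations):
            cand = {k: max(0.0, v + random.gauss(0, 0.1)) for k, v in best.items()}
            score = sum(auto_playtest(cand))
            if score >= best_score:
                best, best_score = cand, score
            if all(auto_playtest(best)):
                break
        return best

    print(fine_tune({"gravity": 1.0, "spawn_rate": 1.0}))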
MCTS/EA hybrid GVGAI players and game difficulty estimation
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860384
Authors: Hendrik Horn, Vanessa Volz, Diego Perez Liebana, M. Preuss
Abstract: In the General Video Game Playing competitions of recent years, both Monte-Carlo tree search and Evolutionary Algorithm based controllers have been successful. However, each approach has certain weaknesses, suggesting that certain hybrids could outperform both. We envision and experimentally compare several types of hybrids of the two basic approaches, as well as some possible extensions. To achieve a better understanding of the games in the competition and the strengths and weaknesses of different controllers, we also propose and apply a novel game difficulty estimation scheme based on several observable game characteristics.
Pages: 1-8 | Citations: 23
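One way to hybridise the two approaches, offered as an assumption rather than as one of the paper's specific variants, is a rolling-horizon EA over action sequences whose fitness comes from an MCTS-style rollout evaluation. A sketch under that assumption; the forward model is a stub standing in for the GVGAI framework's simulator.

    import random

    ACTIONS = ["left", "right", "up", "down", "use"]

    def rollout_value(state, plan):
        # Forward-model stub: a real controller would apply the plan in the
        # GVGAI simulator and score the resulting state (plus a random rollout).
        return random.random()

    def mutate(plan, rate=0.2):
        return [random.choice(ACTIONS) if random.random() < rate else a for a in plan]

    def hybrid_decide(state, pop_size=10, horizon=8, generations=20):
        pop = [[random.choice(ACTIONS) for _ in range(horizon)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=lambda p: rollout_value(state, p), reverse=True)
            elite = scored[: pop_size // 2]
            pop = elite + [mutate(p) for p in elite]  # elitism + mutation
        return max(pop, key=lambda p: rollout_value(state, p))[0]  # first action

    print(hybrid_decide(state=None))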
Artefacts: Minecraft meets collaborative interactive evolution
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860434
Authors: Cristinel Patrascu, S. Risi
Abstract: Procedural content generation has shown promise in a variety of different games. In this paper we introduce a new kind of game, called Artefacts, that combines a sandbox-like environment akin to Minecraft with the ability to interactively evolve unique three-dimensional building blocks. Artefacts allows players not only to collaborate by building larger structures from evolved objects but also to continue the evolution of others' artefacts. Results from playtests on three different game iterations indicate that players generally enjoy playing the game and are able to discover a wide variety of different 3D objects. Moreover, while there is no explicit goal in Artefacts, the sandbox environment together with the ability to evolve unique shapes does allow some interesting gameplay to emerge.
Pages: 1-8 | Citations: 9
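The interactive-evolution loop at the core of Artefacts can be sketched generically: the player's chosen artefact seeds the next population, and seeding from another player's saved genome is what makes the evolution collaborative. What a genome actually encodes in the game is not specified here; the flat parameter vector below is a stand-in.

    import random

    def mutate(genome, sigma=0.1):
        return [g + random.gauss(0, sigma) for g in genome]

    def next_generation(selected, pop_size=9):
        # Interactive evolution: no fitness function; the player's pick
        # (or another player's shared artefact) parents the whole population.
        return [mutate(selected) for _ in range(pop_size)]

    genome = [random.random() for _ in range(8)]  # stand-in shape parameters
    for _ in range(3):
        population = next_generation(genome)
        genome = population[0]  # stand-in for the player's on-screen choice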
Intrinsically motivated general companion NPCs via Coupled Empowerment Maximisation
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860406
Authors: C. Guckelsberger, Christoph Salge, S. Colton
Abstract: Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution, and introduce NPCs to full game-generation systems.
Pages: 1-8 | Citations: 23
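Empowerment is formally the channel capacity between an agent's actions and its future sensor states; a common deterministic simplification counts distinct reachable states. The sketch below couples the companion's and the player's empowerment through a shared transition model, which is one plausible reading of the principle rather than the paper's exact formulation; the grid world and the 0.7 weighting are assumptions.

    import itertools, math

    ACTIONS = ["N", "S", "E", "W", "stay"]

    def grid_model(state, agent, action):
        # Stub transition on a 5x5 grid; state maps agent name -> (x, y).
        dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0),
                  "W": (-1, 0), "stay": (0, 0)}[action]
        pos = dict(state)
        x, y = pos[agent]
        pos[agent] = (min(4, max(0, x + dx)), min(4, max(0, y + dy)))
        return tuple(sorted(pos.items()))

    def empowerment(state, agent, model, n=2):
        # Deterministic n-step proxy: log of the number of distinct states
        # the agent can reach with any n-action sequence.
        reachable = set()
        for seq in itertools.product(ACTIONS, repeat=n):
            s = state
            for a in seq:
                s = model(s, agent, a)
            reachable.add(s)
        return math.log(len(reachable))

    def companion_move(state, model, weight_player=0.7):
        # Coupled maximisation: keep the player empowered first, itself second.
        def coupled(a):
            s2 = model(state, "companion", a)
            return (weight_player * empowerment(s2, "player", model)
                    + (1 - weight_player) * empowerment(s2, "companion", model))
        return max(ACTIONS, key=coupled)

    state = (("companion", (2, 2)), ("player", (0, 0)))
    print(companion_move(state, grid_model))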
Evolving micro for 3D Real-Time Strategy games
2016 IEEE Conference on Computational Intelligence and Games (CIG) | Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860437
Authors: T. DeWitt, S. Louis, Siming Liu
Abstract: This paper extends prior work on generating two-dimensional micro for Real-Time Strategy games to three dimensions. We extend our influence map and potential field representation to three dimensions and compare two hill-climbers with a genetic algorithm on the problem of generating high-performance influence map, potential field, and reactive control parameters that govern the behavior of units in an open-source Real-Time Strategy game. Results indicate that genetic algorithms evolve better behaviors for ranged units, which inflict damage on enemies while kiting to avoid damage, and better behaviors for melee units, which concentrate firepower on selected enemies to decrease the opposing army's effectiveness. Evolved behaviors, particularly for ranged units, generalize well to new scenarios. Our work thus provides evidence for the viability of an influence map and potential field based representation for reactive control in games, 3D simulations, and aerial vehicle swarms.
Pages: 1-8 | Citations: 4
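A potential-field controller of the kind evolved above reduces each enemy to an attractive and a repulsive term whose balance, closing to weapons range versus kiting away, is exactly what a GA can tune. The field shape, parameter names, and the random fitness stub below are illustrative assumptions, not the paper's representation.

    import math, random

    def potential_force(unit_pos, enemy_pos, params):
        # 3D attraction minus short-range repulsion along the enemy direction.
        d = math.dist(unit_pos, enemy_pos) + 1e-9
        direction = [(e - u) / d for u, e in zip(unit_pos, enemy_pos)]
        magnitude = params["attract"] / d - params["repulse"] / d**2
        return [c * magnitude for c in direction]

    def fitness(params):
        # Stub: a real evaluation plays skirmishes in the RTS engine and
        # scores surviving hit points; here it is random.
        return random.random()

    # Minimal generational GA over the field parameters.
    pop = [{"attract": random.uniform(0, 10), "repulse": random.uniform(0, 10)}
           for _ in range(20)]
    for _ in range(30):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        pop = parents + [{k: abs(v + random.gauss(0, 0.5))
                          for k, v in random.choice(parents).items()}
                         for _ in range(10)]
    print(pop[0])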