2015 IEEE Conference on Computational Intelligence and Games (CIG): Latest Publications

Player-adaptive Spelunky level generation
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317948
David Stammer, Tobias Günther, M. Preuss
{"title":"Player-adaptive Spelunky level generation","authors":"David Stammer, Tobias Günther, M. Preuss","doi":"10.1109/CIG.2015.7317948","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317948","url":null,"abstract":"Procedural Content Generation (PCG) is nowadays widely applied to many different aspects of computer games. However, it can do more than to assist level designers during game creation. It can generate personalized levels according to the tastes and abilities of players online. This has already been demonstrated for (largely 1D) scrolling games and we show in this work how personalized, difficulty-adjusted levels can be generated for the more complex 2D platformer Spelunky. As direct and indirect player feedback is taken into account, the method may be filed under the Experience-Driven PCG approach. Our approach is based on a rather generic rule set that may also be transferred to similar games. We also present a user study showing that most users appreciate the online adaptation but are especially critical about making the game easier to play at any time.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126163191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Examination of representational expression in maze generation algorithms
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317902
A. Kozlova, J. A. Brown, Elizabeth Reading
{"title":"Examination of representational expression in maze generation algorithms","authors":"A. Kozlova, J. A. Brown, Elizabeth Reading","doi":"10.1109/CIG.2015.7317902","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317902","url":null,"abstract":"Procedural content generation is widely used in game development, although it does not give an opportunity to generate the whole game level. However, for some game artifacts procedural generation is the preferred way of creation. Mazes are a good example of such artifacts: manual generation of mazes does not provide a variety of combinations and makes games too predictable. This article gives an overview of three basic two-dimensional maze generation algorithms: Depth-first search, Prim's, and Recursive Definition. These algorithms describe three conceptually different approaches to maze generation. The path lengths of the algorithms are compared, and advantages and disadvantages of each algorithm are given.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133579703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Optimization of Angry Birds AI controllers with distributed computing
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317894
Du-Mim Yoon, Joo-Seon Lee, Hyun-Su Seon, Jeong-Hyeon Kim, Kyung-Joong Kim
{"title":"Optimization of Angry Birds AI controllers with distributed computing","authors":"Du-Mim Yoon, Joo-Seon Lee, Hyun-Su Seon, Jeong-Hyeon Kim, Kyung-Joong Kim","doi":"10.1109/CIG.2015.7317894","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317894","url":null,"abstract":"The one of important issues in artificial intelligence (AI) research is the development of AI for games because of its difficulty. To promote the research on video games AI, there have been several game AI competitions. However, some games with physics engine (geometry friends or Angry Birds) have no support on the prediction of future events using simulation. It makes much difficult to build AI for the games with physics. As a result, AI creator should spend much time to optimize the parameters of their program by trial and errors. In this paper, we report our approach to build AI for Angry Birds (Plan A+, 3rd rank in 2014 Angry Birds AI competition and the first entry achieved 1 million points in benchmarking test). In our controller, we adopt multiple strategies to increase generalization ability and hybrid optimization techniques (greedy search from human's manually tuned parameters) with parallel machines.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124562239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Testing reliability of replay-based imitation for StarCraft
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317899
In-Seok Oh, Kyung-Joong Kim
{"title":"Testing reliability of replay-based imitation for StarCraft","authors":"In-Seok Oh, Kyung-Joong Kim","doi":"10.1109/CIG.2015.7317899","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317899","url":null,"abstract":"For StarCraft, it's easy to download lots of replays from gaming portals. Using simple tools, it's possible to extract all the gaming events stored in the replays. At each frame, it can tell us the human player's decision making given game states. Instead of making hard-coded AIs, it's promising to imitate the human player's decision recorded in the replays. In this study, we propose to create an AI bot imitates human player's high-level decisions (attack or retreat) on a group of units from replays. As a first step, we tested the reliability of the imitation system using replays from portals. We reported the ratio of apparent mistakes from the imitation system and the way to reduce the error.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125188411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Evolving a general electronic stability program for car simulated in TORCS
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317955
Jilin Huang, I. Tanev, K. Shimohara
{"title":"Evolving a general electronic stability program for car simulated in TORCS","authors":"Jilin Huang, I. Tanev, K. Shimohara","doi":"10.1109/CIG.2015.7317955","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317955","url":null,"abstract":"We present an approach of evolving (via Genetic Programming, GP) the electronic stability program (ESP) of a car, realistically simulated in The Open Racing Car Simulator (TORCS). ESP is intended to assist the yaw rotation of an unstable (e.g., either understeering or oversteering) car in low-grip, slippery road conditions by applying a carefully-timed asymmetrical braking forces to its wheels. In the proposed approach, the amount of ESP-induced brake force is represented as an evolvable (via GP) algebraic function (brake force function BFF) of the values of parameters, pertinent to the state of the car, and their derivatives. In order to obtain a general BFF, i.e., a function that result in a handling of the car, that is better than that of non ESP car, for a wide range of conditions, we evaluate the evolving BFF in several fitness cases representing different combinations of surface conditions and speeds of the car. The experimental results indicate that, compared to the car without ESP, the best evolved BFF of ESP offers a superior controllability - in terms of both (i) a smaller deviation from the ideal trajectory and (ii) faster average speed on a wide range of track conditions (“icy”, “snowy”, “rainy” and “dry”) and traveling speeds. Presented work could be viewed as an attempt to contribute a new functionality in TORCS that might enrich the experience of gamers by the enhanced controllability of their cars in slippery road conditions. Also, the results could be seen as a step towards the verification of the feasibility of applying GP for automated, evolutionary development of ESP.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115150651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
The rectangular seeds of Domineering
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317904
T. Cazenave, Jialin Liu, O. Teytaud
{"title":"The rectangular seeds of Domineering","authors":"T. Cazenave, Jialin Liu, O. Teytaud","doi":"10.1109/CIG.2015.7317904","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317904","url":null,"abstract":"Recently, a methodology has been proposed for boosting the computational intelligence of randomized game-playing programs. We modify this methodology by working on rectangular, rather than square, matrices; and we apply it to the Domineering game. At CIG 2015, We propose a demo in the case of Go. Hence, players on site can contribute to the scientific validation by playing (in a double blind manner) against both the original algorithm and its boosted version.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127822399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Online learning and mining human play in complex games
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317942
M. Dobre, A. Lascarides
{"title":"Online learning and mining human play in complex games","authors":"M. Dobre, A. Lascarides","doi":"10.1109/CIG.2015.7317942","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317942","url":null,"abstract":"We propose a hybrid model for automatically acquiring a policy for a complex game, which combines online learning with mining knowledge from a corpus of human game play. Our hypothesis is that a player that learns its policies by combining (online) exploration with biases towards human behaviour that's attested in a corpus of humans playing the game will outperform any agent that uses only one of the knowledge sources. During game play, the agent extracts similar moves made by players in the corpus in similar situations, and approximates their utility alongside other possible options by performing simulations from its current state. We implement and assess our model in an agent playing the complex win-lose board game Settlers of Catan, which lacks an implementation that would challenge a human expert. The results from the preliminary set of experiments illustrate the potential of such a joint model.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131208291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Learning to shoot in first person shooter games by stabilizing actions and clustering rewards for reinforcement learning
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317928
F. Glavin, M. G. Madden
{"title":"Learning to shoot in first person shooter games by stabilizing actions and clustering rewards for reinforcement learning","authors":"F. Glavin, M. G. Madden","doi":"10.1109/CIG.2015.7317928","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317928","url":null,"abstract":"While reinforcement learning (RL) has been applied to turn-based board games for many years, more complex games involving decision-making in real-time are beginning to receive more attention. A challenge in such environments is that the time that elapses between deciding to take an action and receiving a reward based on its outcome can be longer than the interval between successive decisions. We explore this in the context of a non-player character (NPC) in a modern first-person shooter game. Such games take place in 3D environments where players, both human and computer-controlled, compete by engaging in combat and completing task objectives. We investigate the use of RL to enable NPCs to gather experience from game-play and improve their shooting skill over time from a reward signal based on the damage caused to opponents. We propose a new method for RL updates and reward calculations, in which the updates are carried out periodically, after each shooting encounter has ended, and a new weighted-reward mechanism is used which increases the reward applied to actions that lead to damaging the opponent in successive hits in what we term “hit clusters”.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"138 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128754888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Play profiles: The effect of infinite-length games on evolution in the iterated Prisoner's Dilemma
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317950
Lee-Ann Barlow, Jeffrey Tsang
{"title":"Play profiles: The effect of infinite-length games on evolution in the iterated Prisoner's Dilemma","authors":"Lee-Ann Barlow, Jeffrey Tsang","doi":"10.1109/CIG.2015.7317950","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317950","url":null,"abstract":"It is well-known that the correct strategy in iterated Prisoner's Dilemma with a finite known number of rounds is to always defect. Evolution of Prisoner's Dilemma playing agents mirrors this: the more rounds the agents play against each other per encounter, the more likely the population will evolve to a cooperative state. Prior work has demonstrated that the result of evolution changes dramatically from very short games up to about 60-85 rounds, which yields substantially similar populations as those using 150 rounds. We extend this study using more powerful statistical tests and mathematical tools, including fingerprinting and play profiles, to consider the problem in the opposite direction: as the correct strategy in infinitely iterated Prisoner's Dilemma is to always cooperate, how many rounds are needed until evolution reflects this empirically? Within a very large plateau, from around 150 to a million rounds, evolution does not significantly change its behaviour. Surprisingly, behaviour does change again from millions to billions of rounds, but not further from billions to infinite-round games. This suggests that evolution operates on nontrivial categories of cooperativity depending on the number of rounds and the details of the representation.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114317032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
“Let's save resources!”: A dynamic, collaborative AI for a multiplayer environmental awareness game
2015 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date : 2015-11-05 DOI: 10.1109/CIG.2015.7317952
P. Sequeira, Francisco S. Melo, Ana Paiva
{"title":"“Let's save resources!”: A dynamic, collaborative AI for a multiplayer environmental awareness game","authors":"P. Sequeira, Francisco S. Melo, Ana Paiva","doi":"10.1109/CIG.2015.7317952","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317952","url":null,"abstract":"In this paper we present a collaborative artificial intelligence (AI) module for a turn-based, multiplayer, environmental awareness game. The game is a version of the EnerCities serious game, modified in the context of a European-Union project to support sequential plays of an emphatic robotic tutor interacting with two human players in a social and pedagogical manner. For that purpose, we created an AI module capable of informing the game-playing and pedagogical decision-making of the robotic tutor. Specifically, the module includes an action planner capable of, together with a game simulator, perform forward-planning according to player preferences and current game values. Such predicted values are also used as an alert system to inform the other players of near consequences of current behaviors and advise alternative, sustainable courses of action in the game. The module also incorporates a social component that continuously models the game preferences of each player and automatically adjusts the tutor's strategy so to follow the group's “action tendency”. The proposed AI module is therefore used to inform about important aspects of the game state and also the human players actions. In this paper we overview the properties and complexity of this collaborative version of the game and detail the AI module and its components. We also report on the successes of using the proposed module for controlling the behavior of a robotic tutor in several experimental studies, including the interaction with children playing collaborative EnerCities.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123872384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6