2012 IEEE Conference on Computational Intelligence and Games (CIG): Latest Publications

Generating interesting Monopoly boards from open data
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374168
Marie Gustafsson Friberger, J. Togelius
{"title":"Generating interesting Monopoly boards from open data","authors":"Marie Gustafsson Friberger, J. Togelius","doi":"10.1109/CIG.2012.6374168","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374168","url":null,"abstract":"With increasing amounts of open data, especially where data can be connected with various additional information resources, new ways of visualizing and making sense of this data become possible and necessary. This paper proposes, discusses and exemplifies the concept of data games, games that allow the player(s) to explore data that is derived from outside the game, by transforming the data into something that can be played with. The transformation takes the form of procedural content generation based on real-world data. As an example of a data game, we describe Open Data Monopoly, a game board generator that uses economic and social indicator data for local governments in the UK. Game boards are generated by first collecting user input on which indicators to use and how to weigh them, as well as what criteria should be used for street selection. Sets of streets are then evolved that maximize the selected criteria, and ordered according to “prosperity” as defined subjectively by the user. Chance and community cards are created based on auxiliary data about the local political entities.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129350726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Enhancements for Monte-Carlo Tree Search in Ms Pac-Man
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374165
Tom Pepels, M. Winands
{"title":"Enhancements for Monte-Carlo Tree Search in Ms Pac-Man","authors":"Tom Pepels, M. Winands","doi":"10.1109/CIG.2012.6374165","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374165","url":null,"abstract":"In this paper enhancements for the Monte-Carlo Tree Search (MCTS) framework are investigated to play Ms Pac-Man. MCTS is used to find an optimal path for an agent at each turn, determining the move to make based on randomised simulations. Ms Pac-Man is a real-time arcade game, in which the protagonist has several independent goals but no conclusive terminal state. Unlike games such as Chess or Go there is no state in which the player wins the game. Furthermore, the Pac-Man agent has to compete with a range of different ghost agents, hence limited assumptions can be made about the opponent's behaviour. In order to expand the capabilities of existing MCTS agents, five enhancements are discussed: 1) a variable depth tree, 2) playout strategies for the ghost-team and Pac-Man, 3) including long-term goals in scoring, 4) endgame tactics, and 5) a Last-Good-Reply policy for memorising rewarding moves during playouts. An average performance gain of 40,962 points, compared to the average score of the top scoring Pac-Man agent during the CIG'11, is achieved by employing these methods.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123448358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
A Monte-Carlo path planner for dynamic and partially observable environments
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374158
M. Naveed, D. Kitchin, A. Crampton, L. Chrpa, P. Gregory
{"title":"A Monte-Carlo path planner for dynamic and partially observable environments","authors":"M. Naveed, D. Kitchin, A. Crampton, L. Chrpa, P. Gregory","doi":"10.1109/CIG.2012.6374158","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374158","url":null,"abstract":"In this paper, we present a Monte-Carlo policy rollout technique (called MOCART-CGA) for path planning in dynamic and partially observable real-time environments such as Real-time Strategy games. The emphasis is put on fast action selection motivating the use of Monte-Carlo techniques in MOCART-CGA. Exploration of the space is guided by using corridors which direct simulations in the neighbourhood of the best found moves. MOCART-CGA limits how many times a particular state-action pair is explored to balance exploration of the neighbourhood of the state and exploitation of promising actions. MOCART-CGA is evaluated using four standard pathfinding benchmark maps, and over 1000 instances. The empirical results show that MOCART-CGA outperforms existing techniques, in terms of search time, in dynamic and partially observable environments. Experiments have also been performed in static (and partially observable) environments where MOCART-CGA still requires less time to search than its competitors, but typically finds lower quality plans.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123483352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Monte Carlo Tree Search: Long-term versus short-term planning
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374159
Diego Perez Liebana, Philipp Rohlfshagen, S. Lucas
{"title":"Monte Carlo Tree Search: Long-term versus short-term planning","authors":"Diego Perez Liebana, Philipp Rohlfshagen, S. Lucas","doi":"10.1109/CIG.2012.6374159","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374159","url":null,"abstract":"In this paper we investigate the use of Monte Carlo Tree Search (MCTS) on the Physical Travelling Salesman Problem (PTSP), a real-time game where the player navigates a ship across a map full of obstacles in order to visit a series of waypoints as quickly as possible. In particular, we assess the algorithm's ability to plan ahead and subsequently solve the two major constituents of the PTSP: the order of waypoints (long-term planning) and driving the ship (short-term planning). We show that MCTS can provide better results when these problems are treated separately: the optimal order of cities is found using Branch & Bound and the ship is navigated to collect the waypoints using MCTS. We also demonstrate that the physics of the PTSP game impose a challenge regarding the optimal order of cities and propose a solution that obtains better results than following the TSP route of minimum Euclidean distance.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128363886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Playing PuyoPuyo: Two search algorithms for constructing chain and tactical heuristics
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374140
Kokolo Ikeda, Daisuke Tomizawa, Simon Viennot, Yuu Tanaka
{"title":"Playing PuyoPuyo: Two search algorithms for constructing chain and tactical heuristics","authors":"Kokolo Ikeda, Daisuke Tomizawa, Simon Viennot, Yuu Tanaka","doi":"10.1109/CIG.2012.6374140","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374140","url":null,"abstract":"Tetris is one of the most famous tile-matching video games, and has been used as a test bed for artificial intelligence techniques such as machine learning. Many games have been derived from such early tile-matching games, in this paper we discuss how to develop AI players of \"PuyoPuyo\". PuyoPuyo is a popular two-player game, and where the main point is to construct a \"chain\" longer than the opponent. We introduce two tree search algorithms and some tactical heuristics for improving the performance. We were able to reach an average chain length of 11, notably higher than that of the commercial Als.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130082492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Automatic design of deterministic sequences of decisions for a repeated imitation game with action-state dependency
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374131
Pablo J. Villacorta, Luis Quesada, D. Pelta
{"title":"Automatic design of deterministic sequences of decisions for a repeated imitation game with action-state dependency","authors":"Pablo J. Villacorta, Luis Quesada, D. Pelta","doi":"10.1109/CIG.2012.6374131","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374131","url":null,"abstract":"A repeated conflicting situation between two agents is presented in the context of adversarial decision making. The agents simultaneously choose an action as a response to an external event, and accumulate some payoff for their decisions. The next event statistically depends on the last choices of the agents. The objective of the first agent, called the imitator, is to imitate the behaviour of the other. The second agent tries not to be properly predicted while, at the same time, choosing actions that report a high payoff. When the situation is repeated through time, the imitator has the opportunity to learn the adversary's behaviour. In this work, we present a way to automatically design a sequence of deterministic decisions for one of the agents maximizing the expected payoff while keeping his choices difficult to predict. Determinism provides some practical advantages over partially randomized strategies investigated in previous works, mainly the reduction of the variance of the payoff when using the strategy.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116061019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Self-adaptive games for rehabilitation at home
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374154
Michele Pirovano, R. Mainetti, G. Baud-Bovy, P. Lanzi, N. A. Borghese
{"title":"Self-adaptive games for rehabilitation at home","authors":"Michele Pirovano, R. Mainetti, G. Baud-Bovy, P. Lanzi, N. A. Borghese","doi":"10.1109/CIG.2012.6374154","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374154","url":null,"abstract":"Computer games are a promising tool to support rehabilitation at home. It is widely recognized that rehabilitation games should (i) be nicely integrated in general-purpose rehabilitation stations, (ii) adhere to the constraints posed by the clinical protocols, (iii) involve movements that are functional to reach the rehabilitation goal, and (iv) adapt to the patients' current status and progress. However, the vast majority of existing rehabilitation games are stand-alone applications (not integrated in a patient station), that rarely adapt to the patients' condition. In this paper, we present the first prototype of the patient rehabilitation station we developed that integrates video games for rehabilitation with methods of computational intelligence both for on-line monitoring the movements' execution during the games and for adapting the gameplay to the patients' status. The station employs a fuzzy system to monitor the exercises execution, on-line, according to the clinical constraints defined by the therapist at configuration time, and to provide direct feedback to the patients. At the same time, it applies real-time adaptation (using the Quest Bayesian adaptive approach) to modify the gameplay according both (i) to the patient current performance and progress and (ii) to the exercise plan specified by the therapist. Finally, we present one of the games available in our patient stations (designed in tight cooperation with therapists) that integrates monitoring functionalities with in-game self-adaptation to provide the best support possible to patients during their routine.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121233904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 109
Progressive neural network training for the Open Racing Car Simulator
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374146
Christos Athanasiadis, Damianos Galanopoulos, A. Tefas
{"title":"Progressive neural network training for the Open Racing Car Simulator","authors":"Christos Athanasiadis, Damianos Galanopoulos, A. Tefas","doi":"10.1109/CIG.2012.6374146","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374146","url":null,"abstract":"In this paper a novel methodology for training neural networks as car racing controllers is proposed. Our effort is focused on finding a new fast and effective way to train neural networks that will avoid stacking in local minima and can learn from advanced bot-teachers to handle the basic tasks of steering and acceleration in The Open Racing Car Simulator (TORCS). The proposed approach is based on Neural Networks that learn progressively the driving behaviour of other bots. Starting with a simple rule-based decision driver, our scope is to handle its decisions with NN and increase its performance as much as possible. In order to do so, we propose a sequence of Neural networks that are gradually trained from more dexterous drivers, as well as, from the simplest to the most skillful controller. Our method is actually, an effective initialization method for Neural Networks that leads to increasingly better driving behavior. We have tested the method in several tracks of increasing difficulty. In all cases the proposed method resulted in improved bot decisions.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132659195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Evolving the optimal racing line in a high-end racing game
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374145
Matteo Botta, Vincenzo Gautieri, D. Loiacono, P. Lanzi
{"title":"Evolving the optimal racing line in a high-end racing game","authors":"Matteo Botta, Vincenzo Gautieri, D. Loiacono, P. Lanzi","doi":"10.1109/CIG.2012.6374145","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374145","url":null,"abstract":"Finding a racing line that allows to achieve a competitive lap-time is a key problem in real-world car racing as well as in the development of non-player characters for a commercial racing game. Unfortunately, solving this problem generally requires a domain expert and a trial-and-error process. In this work, we show how evolutionary computation can be successfully applied to solve this task in a high-end racing game. To this purpose, we introduce a novel encoding for the racing lines based on a set of connected Bezier curves. In addition, we compare two different methods to evaluate the evolved racing lines: a simulation-based fitness and an estimation-based fitness; the former does not require any previous knowledge but is rather expensive; the latter is much less expensive but requires few domain knowledge and is not completely accurate. Finally, we test our approach using The Open Racing Car Simulator (TORCS), a state-of-the-art open source simulator, as a testbed.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132016996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
The huddle: Combining AI techniques to coordinate a player's game characters
2012 IEEE Conference on Computational Intelligence and Games (CIG) Pub Date: 2012-12-06 DOI: 10.1109/CIG.2012.6374157
Timothy Davison, J. Denzinger
{"title":"The huddle: Combining AI techniques to coordinate a player's game characters","authors":"Timothy Davison, J. Denzinger","doi":"10.1109/CIG.2012.6374157","DOIUrl":"https://doi.org/10.1109/CIG.2012.6374157","url":null,"abstract":"We present the huddle, a concept for extending games in which the player is responsible for a group of game characters. The huddle combines several AI methods to allow the player to create a cooperative strategy for his characters to solve a scenario of the game and it takes away from the player the need to frantically jump around in controlling his characters to employ the strategy idea he has. The huddle is entered from a saved game state and allows the player to provide his characters with strategy ideas in form of situations and the actions he wants the characters to take (SAPs). A learner then uses these ideas and adds to it additional SAPs to create a complete strategy. The learner uses a simulation of the real game that uses models for the nonplayer characters based on the experiences the player had with the game, so far, to evaluate strategy candidates. We evaluated the huddle idea with a fantasy-themed role playing game and show that the huddle indeed allows a player to concentrate on his strategy while still requiring him to come up with the solution ideas for scenarios.","PeriodicalId":288052,"journal":{"name":"2012 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123733161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3