{"title":"Personality traits in plots with nondeterministic planning for interactive storytelling","authors":"Fabio A. Guilherme da Silva, B. Feijó","doi":"10.1109/CIG.2015.7317957","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317957","url":null,"abstract":"Interactive storytelling is a form of digital entertainment in which users participate in the process of composing and dramatizing a story. In this context, determining the characters' behaviour according to their individual preferences can be an interesting way of generating plausible stories where the characters act in a believable manner. Diversity of stories and opportunities for interaction are key requirements to be considered when designing such applications. This work presents concepts and a prototype for the generation and dramatization of interactive nondeterministic plots, using a model of personality traits that serves to guide the actions of the characters presented by the plan generation algorithm. Also, to improve the quality and diversity level of the stories, characters are able to evolve in terms of their personality traits as the plot unfolds, as a reflection of the events they perform or are exposed to.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114656766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction as faster perception in a real-time fighting video game","authors":"K. Asayama, K. Moriyama, Ken-ichi Fukui, M. Numao","doi":"10.1109/CIG.2015.7317672","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317672","url":null,"abstract":"In a real-time video game, AI-controlled players, called agents, are still inferior to skilled human players on equal footing. In this work, we aim to construct a strong agent enough to fight with skilled human players in a real-time fighting video game. First we investigate the relation between perception speed and performance. From a simulation using two agents one of which has delayed perception, we know that perception speed is a critical factor in performance. Moreover, it means that it is effective to predict the opponent's behavior to enhance the agent. Therefore, we construct an agent that predicts its opponent's position and action in a fighting video game. The agent uses linear extrapolation to predict the position and the k-nearest neighbor method to predict the action. Comparing agents with and without the prediction ability, we see that the predicting agent mostly obtained higher scores than the non-predicting one in fighting with six contestants of a previous competition of the game.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116915756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regulation of exploration for simple regret minimization in Monte-Carlo tree search","authors":"Yun-Ching Liu, Yoshimasa Tsuruoka","doi":"10.1109/CIG.2015.7317923","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317923","url":null,"abstract":"The application of multi-armed bandit (MAB) algorithms was a critical step in the development of Monte-Carlo tree search (MCTS). One example would be the UCT algorithm, which applies the UCB bandit algorithm. Various research has been conducted on applying other bandit algorithms to MCTS. Simple regret bandit algorithms, which aim to identify the optimal arm after a number of trials, have been of great interest in various fields in recent years. However, the simple regret bandit algorithm has the tendency to spend more time on sampling suboptimal arms, which may be a problem in the context of game tree search. In this research, we will propose combined confidence bounds, which utilize the characteristics of the confidence bounds of the improved UCB and UCB √· algorithms to regulate exploration for simple regret minimization in MCTS. We will demonstrate the combined confidence bounds bandit algorithm has better empirical performance than that of the UCB algorithm on the MAB problem. We will show that the combined confidence bounds MCTS (CCB-MCTS) has better performance over plain UCT on the game of 9 × 9 Go, and has shown good scalability. 
We will also show that the performance of CCB-MCTS can be further enhanced with the application of all-moves-as-first (AMAF) heuristic.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114552786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ACE-RL-Checkers: Improving automatic case elicitation through knowledge obtained by reinforcement learning in player agents","authors":"H. C. Neto, Rita Maria Silva Julia","doi":"10.1109/CIG.2015.7317926","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317926","url":null,"abstract":"This work proposes a new approach that combines Automatic Case Elicitation with Reinforcement Learning applied to Checkers player agents. This type of combination brings forth the following modifications in relation to those agents that use each of these techniques in isolation: improve the random exploration performed by the Automatic Case Elicitation-based agents and introduce adaptability to the Reinforcement Learning-based agents. In line with the above, the authors present herein the ACE-RL-Checkers player agent, a hybrid system that combines the best abilities from the automatic Checkers players CHEBR and LS-VisionDraughts. CHEBR is an Automatic Case Elicitation-based agent with a learning approach that performs random exploration in the search space. These random explorations allow the agent to present an extremely adaptive and non-deterministic behavior. On the other hand, the high frequency at which decisions are made randomly (mainly in those phases in which the content of the case library is still so scarce) compromises the agent in terms of maintaining a good performance. LS-VisionDraughts is a Multi-Layer Perceptron Neural Network player trained through Reinforcement Learning. Besides having been proven efficient in making decisions, such an agent presents an inconvenience in that it is completely predictable, as the same move is always executed when presented with the same board of play. By combining the best abilities from these players, ACE-RL-Checkers uses knowledge provided from LS-VisionDraughts in order to direct random exploration of the automatic case elicitation technique to more promising regions in the search space. 
Therewith, the ACE-RL-Checkers gains in terms of performance as well as acquires adaptability in its decision-making - choosing moves based on the current game dynamics. Experiments carried out in tournaments involving these agents confirm the performance superiority of ACE-RL-Checkers when pitted against its adversaries.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126322380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Making sense of emergent narratives: An architecture supporting player-triggered narrative processes","authors":"Simon Chauvin, G. Levieux, Jean-Yves Donnart, S. Natkin","doi":"10.1109/CIG.2015.7317936","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317936","url":null,"abstract":"Emergent games have the particularity to allow more possible situations to emerge than progression games do. Coupled with procedural content generation techniques they also tend to increase the number of possible situations that players can encounter., However, in case the player is not creative or lucky enough these many emergent situations can have a low narrative value. This article addresses this problem through an architecture that gives players more responsibilities towards the story by allowing them to trigger Narrative Processes. A Narrative Process is a script capable of making meaningful modifications to the story in real time. Our proposed architecture relies on an Interpretation Engine whose role is to make sense of the emergent world as it is changing and inform the Narrative Processes with high level story concepts such as actors and places., We first cover the basics of emergent games and interactive narratives and then present the architecture behind the Narrative Processes as well as the Interpretation Engine. 
We conclude by a discussion of the potential impact of our architecture on the fundamental characteristics of emergent games.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"82 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131139283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A strongly typed GP-based video game player","authors":"Baozhu Jia, M. Ebner","doi":"10.1109/CIG.2015.7317920","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317920","url":null,"abstract":"This paper attempts to evolve a general video game player, i.e. an agent which is able to learn to play many different video games with little domain knowledge. Our project uses strongly typed genetic programming as a learning algorithm. Three simple hand-crafted features are chosen to represent the game state. Each feature is a vector which consists of the position and orientation of each game object that is visible on the screen. These feature vectors are handed to the learning algorithm which will output the action the game player will take next. Game knowledge and feature vectors are acquired by processing screen grabs from the game. Three different video games are used to test the algorithm. Experiments show that our algorithm is able to find solutions to play all these three games efficiently.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131222334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“RehabConnex”: A middleware for the flexible connection of multimodal game applications with input devices used in movement therapy and physical exercising","authors":"A. Martin-Niedecken, René Bauer, Ralf Mauerhofer, U. Götz","doi":"10.1109/CIG.2015.7317671","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317671","url":null,"abstract":"“RehabConnex” is a middleware product, developed specifically to facilitate communication between hardware devices and multimodal game applications used in movement therapy and physical exercise. “RehabConnex“ is the core key development of the “IMIC”-project (Innovative Movement Therapy in Childhood), created by an interdisciplinary team of university partners to allow flexible connection between rehabilitation game environments and movement therapy robots for multimodal gameplay. “RehabConnex” both allows the patient to experience multimodal “human-robot-game-interaction” (HRGI) and helps the therapist to regulate and monitor the processes. In addition to its benefits for a broad range of game-based rehabilitation scenarios, the development of “RehabConnex” opens up similar perspectives for its use in multimodal game-based physical exercising (Exergames) and for other “human-device-game-interactions” (HDGI). “RehabConnex” yields innovative research questions on the general effects of multimodal environments.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133142401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning a game commentary generator with grounded move expressions","authors":"Hirotaka Kameko, Shinsuke Mori, Yoshimasa Tsuruoka","doi":"10.1109/CIG.2015.7317930","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317930","url":null,"abstract":"This paper describes a machine learning-based approach for generating natural language comments on Shogi games. We generate comments by using a discriminative language model trained with a large amount of Shogi game records and comments made by human experts. Central to our method is accurate mapping of move expressions appearing in experts' comments to game states (i.e. positions) of Shogi, because the discriminative language model is trained with textual expressions paired with corresponding Shogi positions. We describe how such mapping can be performed by using evaluation information obtained from a Shogi program. Experimental results show that we can actually generate helpful comments for some positions.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124169839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EnHiC: An enforced hill climbing based system for general game playing","authors":"Amin Babadi, B. Omoomi, G. Kendall","doi":"10.1109/CIG.2015.7317907","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317907","url":null,"abstract":"Accurate decision making in games has always been a very complex and yet interesting problem in Artificial Intelligence (AI). General video game playing (GVGP) is a new branch of AI whose target is to design agents that are able to win in every unknown game environment by choosing wise decisions. This paper proposes a new search methodology based on enforced hill climbing for using in GVGP and we evaluate its performance on the benchmarks of the general video game AI competition (GVG-AI). Also a simple and efficient heuristic function for GVGP is proposed. The results show that EnHiC outperforms several well-known and successful methods in the GVG-AI competition.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114407760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation of an arbitrary shaped large maze by assembling mazes","authors":"Cheong-mok Bae, Eun Kwang Kim, Jongchan Lee, Kyung-Joong Kim, J. Na","doi":"10.1109/CIG.2015.7317901","DOIUrl":"https://doi.org/10.1109/CIG.2015.7317901","url":null,"abstract":"Lots of games have used maze generation for their maps, geographical features or terrains. Previously, they used spanning tree algorithms, growing tree algorithm, and so on. However, the methods placed restrictions on the shape of mazes and used fixed matrices. In this paper, we propose a simple and easy way to generate an arbitrary shaped maze by assembling mazes. Because it's possible to selectively combining mazes with user preference, the final big maze can be expected to be a personalized one. If a database of mazes with gamer's playing logs or preferences is available, the system can dynamically compose an arbitrary new mazes in real time. Furthermore, it can support gamers to create new complex mazes for user-generated contents.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129808927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}