{"title":"Causal Necessity as a Narrative Planning Step Cost Function","authors":"Stephen G. Ware, Lasantha Senanayake, Rachelyn Farrell","doi":"10.1609/aiide.v19i1.27511","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27511","url":null,"abstract":"Narrative planning generates a sequence of actions which must achieve the author's goal for the story and must be composed only of actions that make sense for the characters who take them. A causally necessary action is one that would make the plan impossible to execute if it were left out. We hypothesize that action sequences which are solutions to narrative planning problems are more likely to feature causally necessary actions than those which are not solutions. In this paper, we show that prioritizing sequences with more causally necessary actions can lead to solutions faster in ten benchmark story planning problems.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Level Building Sidekick: An AI-Assisted Level Editor Package for Unity","authors":"Camila Aliaga, Cristian Vidal, Gabriel K. Sepulveda, Nicolas Romero, Fernanda González, Nicolas A. Barriga","doi":"10.1609/aiide.v19i1.27535","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27535","url":null,"abstract":"Developing an original video game requires high investment levels, market research, cost-effective solutions, and a quick development process. Game developers usually reach for commercial off-the-shelf components, often available in the engine's marketplace, to reduce costs. Mixed-initiative authoring tools allow us to combine the thoughtful work of human designers with the productivity gains of automated techniques. However, most commercial AI-assisted Procedural Content Generation tools focus on generating small independent components, and standalone research tools available for generating full game levels with state-of-the-art algorithms usually lack integration with commercial game engines. This article aims to fill this gap between industry and academia. The Level Building Sidekick (LBS) is a mixed-initiative procedural content generation tool built by our research lab in association with four small independent game studios. It has a modular software architecture that enables developers to extend it for their particular projects. The current version has two working modules for building game maps, an early version of a module for populating the level with NPCs or items, and the first stages of a quest editor module. An automated testing module is planned. LBS is distributed as an AI-assisted video game level editor package for Unity. Usability testing performed using the \"Think-Aloud\" methodology indicates LBS has the potential to substantially improve game development processes. However, at this stage, the user interface and the AI recommendations could be more intuitive. As a general comment, the tool is perceived as a substantial contribution to facilitating and shortening development times, compared to only using the base game engine. There is an untapped market for mixed-initiative tools that assist the game designer in creating complete game levels. We expect to fill that market for our partner development studios and provide the community with an open research and development platform in a standard game engine.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CALYPSO: LLMs as Dungeon Master's Assistants","authors":"Andrew Zhu, Lara Martin, Andrew Head, Chris Callison-Burch","doi":"10.1609/aiide.v19i1.27534","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27534","url":null,"abstract":"The role of a Dungeon Master, or DM, in the game Dungeons & Dragons is to perform multiple tasks simultaneously. The DM must digest information about the game setting and monsters, synthesize scenes to present to other players, and respond to the players' interactions with the scene. Doing all of these tasks while maintaining consistency within the narrative and story world is no small feat of human cognition, making the task tiring and unapproachable to new players. Large language models (LLMs) like GPT-3 and ChatGPT have shown remarkable abilities to generate coherent natural language text. In this paper, we conduct a formative evaluation with DMs to establish the use cases of LLMs in D&D and tabletop gaming generally. We introduce CALYPSO, a system of LLM-powered interfaces that support DMs with information and inspiration specific to their own scenario. CALYPSO distills game context into bite-sized prose and helps brainstorm ideas without distracting the DM from the game. When given access to CALYPSO, DMs reported that it generated high-fidelity text suitable for direct presentation to players, and low-fidelity ideas that the DM could develop further while maintaining their creative agency. We see CALYPSO as exemplifying a paradigm of AI-augmented tools that provide synchronous creative assistance within established game worlds, and tabletop gaming more broadly.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating Diverse Play-Style-Centric Agents through Behavioural Cloning","authors":"Branden Ingram, Clint Van Alten, Richard Klein, Benjamin Rosman","doi":"10.1609/aiide.v19i1.27521","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27521","url":null,"abstract":"Developing diverse and realistic agents in terms of behaviour and skill is crucial for game developers to enhance player satisfaction and immersion. Traditional game design approaches involve hand-crafted solutions, while learning game-playing agents often focuses on optimizing for a single objective, or play-style. These processes typically lack intuitiveness, fail to resemble realistic behaviour, and do not encompass a diverse spectrum of play-styles at varying levels of skill. To this end, our goal is to learn a set of policies that exhibit diverse behaviours or styles while also demonstrating diversity in skill level. In this paper, we propose a novel pipeline, called PCPG (Play-style-Centric Policy Generation), which combines unsupervised play-style identification and policy learning techniques to generate a diverse set of play-style-centric agents. The agents generated by the pipeline can effectively capture the richness and diversity of gameplay experiences in multiple video game domains, showcasing identifiable and diverse play-styles at varying levels of proficiency.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Player Identification and Next-Move Prediction for Collectible Card Games with Imperfect Information","authors":"Logan Fields, John Licato","doi":"10.1609/aiide.v19i1.27500","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27500","url":null,"abstract":"Effectively identifying an individual and predicting their future actions is a material aspect of player analytics, with applications for player engagement and game security. Collectible card games are a fruitful test space for studying player identification, given that their large action spaces allow for flexibility in play styles, thereby facilitating behavioral analysis at the individual, rather than the aggregate, level. Further, once players are identified, modeling the differences between individuals may allow us to preemptively detect patterns that foretell future actions. As such, we use the virtual collectible card game \"Legends of Code and Magic\" to research both of these topics. Our main contributions to the task are the creation of a comprehensive dataset of Legends of Code and Magic game states and actions, extensive testing of the minimum information and computational methods necessary to identify an individual from their actions, and examination of the transferability of knowledge collected from a group to unknown individuals.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mechanic Maker 2.0: Reinforcement Learning for Evaluating Generated Rules","authors":"Johor Jara Gonzalez, Seth Cooper, Matthew Guzdial","doi":"10.1609/aiide.v19i1.27522","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27522","url":null,"abstract":"Automated game design (AGD), the study of automatically generating game rules, has a long history in technical games research. AGD approaches generally rely on approximations of human play, either objective functions or AI agents. Despite this, the majority of these approximators are static, meaning they do not reflect human players' ability to learn and improve in a game. In this paper, we investigate the application of Reinforcement Learning (RL) as an approximator for human play for rule generation. We recreate the classic AGD environment Mechanic Maker in Unity as a new, open-source rule generation framework. Our results demonstrate that RL produces distinct sets of rules from an A* agent baseline, which may be more usable by humans.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthesizing Priority Planning Formulae for Multi-Agent Pathfinding","authors":"Shuwei Wang, Vadim Bulitko, Taoan Huang, Sven Koenig, Roni Stern","doi":"10.1609/aiide.v19i1.27532","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27532","url":null,"abstract":"Prioritized planning is a popular approach to multi-agent pathfinding. It prioritizes the agents and then repeatedly invokes a single-agent pathfinding algorithm for each agent such that it avoids the paths of higher-priority agents. Performance of prioritized planning depends critically on cleverly ordering the agents. Such an ordering is provided by a priority function. Recent work successfully used machine learning to automatically produce such a priority function given good orderings as the training data. In this paper, we explore a different technique for synthesizing priority functions, namely program synthesis in the space of arithmetic formulae. We synthesize priority functions expressed as arithmetic formulae over a set of meaningful problem features via a genetic search in the space induced by a context-free grammar. Furthermore, we regularize the fitness function by formula length to synthesize short, human-readable formulae. Such readability is an advantage over previous numeric machine-learning methods and may help explain the importance of features and how to combine them into a good priority function for a given domain. Moreover, our experimental results show that our formula-based priority functions outperform existing machine-learning methods on the standard benchmarks in terms of success rate, run time and solution quality without using more training data.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Difficulty Adjustment via Procedural Level Generation Guided by a Markov Decision Process for Platformers and Roguelikes","authors":"Colan F. Biemer","doi":"10.1609/aiide.v19i1.27540","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27540","url":null,"abstract":"Procedural level generation can create unseen levels and improve the replayability of games, but there are requirements for a generated level. First, a level must be completable. Second, a level must look and feel like a level that would exist in the game, meaning a random combination of tiles that happens to be completable is not enough. On top of these two requirements, though, is the player experience. If a level is too hard, the player will be frustrated. If too easy, they will be bored. Neither outcome is desirable. A procedural level generation system has to account for the player's skill and generate levels at the correct difficulty. I address this issue by showing how a Markov Decision Process can be used as a director to assemble levels tailored to a player's skill level, but I've only demonstrated that my approach works with surrogate agents. For my thesis, I plan to build on my past work by creating a full roguelike and platformer and running two player studies to validate my approach.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Player Experience in Stealth Games: Dynamic Guard Patrol Behavior Study","authors":"Wael Al Enezi, Clark Verbrugge","doi":"10.1609/aiide.v19i1.27513","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27513","url":null,"abstract":"In stealth games, guard patrol behavior constitutes one of the primary challenges players encounter. While most stealth games employ hard-coded guard behaviors, the same approach is not feasible for procedurally generated environments. Previous research has introduced various dynamic guard patrol behaviors; however, more play-testing is needed to quantitatively measure their impact on players. This research paper presents a user study to evaluate players' experiences in terms of enjoyment and difficulty when playing against several dynamic patrol behaviors in a stealth game prototype. The study aimed to determine whether players could differentiate between different guard behaviors and assess their impact on player experience. We found that players were generally capable of distinguishing between the various dynamic guard patrol behaviors in terms of difficulty and enjoyment when competing against them. The study sheds light on the nuances of player perception and experience with different guard behaviors, providing valuable insights for game developers seeking to create engaging and challenging stealth gameplay.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatically Defining Game Action Spaces for Exploration Using Program Analysis","authors":"Sasha Volokh, William G.J. Halfond","doi":"10.1609/aiide.v19i1.27510","DOIUrl":"https://doi.org/10.1609/aiide.v19i1.27510","url":null,"abstract":"The capability to automatically explore different possible game states and functionality is valuable for the automated test and analysis of computer games. However, automatic exploration requires an exploration agent to be capable of determining and performing the possible actions in game states, for which a model is typically unavailable in games built with traditional game engines. Therefore, existing work on automatic exploration typically either manually defines a game's action space or imprecisely guesses the possible actions. In this paper we propose a program analysis technique compatible with traditional game engines, which automatically analyzes the user input handling logic present in a game to determine a discrete action space corresponding to the possible user inputs, along with the conditions under which the actions are valid, and the relevant user inputs to simulate on the game to perform a chosen action. We implemented a prototype of our approach capable of producing the action spaces of Gym environments for Unity games, then evaluated the exploration performance enabled by our technique for random exploration and exploration via curiosity-driven reinforcement learning agents. Our results show that for most games, our analysis enables exploration performance that matches or exceeds that of manually engineered action spaces, and the analysis is fast enough for real-time gameplay.","PeriodicalId":498041,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"2020 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135303525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}