{"title":"Learning Human Motion Models","authors":"Bulent Tastan","doi":"10.1609/aiide.v8i6.12484","DOIUrl":"https://doi.org/10.1609/aiide.v8i6.12484","url":null,"abstract":"My research is focused on using human navigation data in games and simulations to learn motion models from trajectory data. These motion models can be used to: 1) track the opponent’s movement during periods of network occlusion; 2) learn combat tactics by demonstration; 3) guide the planning process when the goal is to intercept the opponent. A training set of example motion trajectories is used to learn two types of parameterized models: 1) a second-order dynamical steering model or 2) the reward vector for a Markov Decision Process. Candidate paths from the model serve as the motion model in a set of particle filters for predicting the opponent’s location at different time horizons. Incorporating the proposed motion models into game bots allows them to customize their tactics for specific human players and function as more capable teammates and adversaries.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130212274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Dataset for StarCraft AI and an Example of Armies Clustering","authors":"Gabriel Synnaeve, P. Bessière","doi":"10.1609/aiide.v8i3.12546","DOIUrl":"https://doi.org/10.1609/aiide.v8i3.12546","url":null,"abstract":"This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing most of the games’ state (not only players’ orders). We explain one possible usage of this dataset by clustering armies on their compositions. This reduction of army compositions to Gaussian mixtures allows for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of army compositions.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129429505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantum Composition and Improvisation","authors":"D. Parson","doi":"10.1609/aiide.v8i4.12552","DOIUrl":"https://doi.org/10.1609/aiide.v8i4.12552","url":null,"abstract":"Quantum mechanical systems exist as superpositions of complementary states that collapse to classical, concrete states upon becoming entangled with the measurement apparatus of observer-participants. A musical composition and its performance constitute a quantum system. Historically, conventional musical notation has presented the appearance of a composition as a deterministic, concrete entity, with interpretation approached as an extrinsic act. This historical perspective inhabits a subspace of the available quantum space. A quantum musical system unifies the composition, instruments, situated performance and perception as a superposition of musical events that collapses to concrete musical events via the interactions and perceptions of performers and audience. A composer captures superposed musical events via implicit or explicit conditional event probabilities, and human and/or machine performers create music by collapsing interrelated probabilities to zeros and ones via observer-participancy.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132904310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating Formal Qualitative Analysis Techniques within a Procedural Narrative Generation System","authors":"Ben A. Kybartas, Clark Verbrugge","doi":"10.1609/aiide.v9i4.12623","DOIUrl":"https://doi.org/10.1609/aiide.v9i4.12623","url":null,"abstract":"Qualitative analysis of procedurally generated narratives remains a difficult hurdle for most narrative generation tools. Typical analysis involves the use of human studies, rating the quality of the generated narratives against a given set of criteria, a costly and time-consuming process. In this paper we integrate a set of features within the ReGEN system which aim to ensure narrative correctness and quality. Correct generation is ensured by performing an analysis of the preconditions and postconditions of each narrative event. Narrative quality is ensured by using an existing set of formal metrics, which relate quality to the structure of the narrative, to guide narrative generation. This quantitative approach provides an objective means of guaranteeing quality within narrative generation.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"16 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132269211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Musical Partner: A Demonstration of Musical Personality Settings for Influencing the Behavior of an Interactive Musical Generation System","authors":"J. Albert","doi":"10.1609/aiide.v9i5.12641","DOIUrl":"https://doi.org/10.1609/aiide.v9i5.12641","url":null,"abstract":"The Interactive Musical Partner (IMP) is software designed for use in duo improvisations, with one human improviser and one instance of IMP, focusing on a freely improvised duo aesthetic. IMP has Musical Personality Settings (MPS) that can be set prior to performance, and these MPS guide the way IMP responds to musical input. The MPS also govern the probability of particular outcomes from IMP's creative algorithms. IMP uses audio feature extraction methods to listen to the human partner and react to, or ignore, the human’s musical input, based on the current MPS. This demonstration shows how the MPS interface with IMP's generative algorithm.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130403305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using the Creative Process for Sound Design Based on Generic Sound Form","authors":"G. Mazzola, F. Thalmann","doi":"10.1609/aiide.v9i5.12659","DOIUrl":"https://doi.org/10.1609/aiide.v9i5.12659","url":null,"abstract":"Building on recent research in musical creativity and the composition process, this paper presents a specific practical application of our theory and software to sound design. The BigBang rubette module, which brings gestural music composition methods to the Rubato Composer software, was recently generalized in order to work with any kind of musical and non-musical object. Here, we focus on time-independent sound objects to illustrate several levels of metacreativity. On the one hand, we show a sample process of designing the sound objects themselves by defining appropriate datatypes, which can be done at runtime. On the other hand, we demonstrate how the creative process itself, recorded by the software once the composer starts working with these sound objects, can be used for both improvisation with and automation of any defined operations and transformations.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126635275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Collaborative Puzzle Game to Study Situated Dialog","authors":"A. Danise, Kristina Striegnitz","doi":"10.1609/aiide.v8i5.12577","DOIUrl":"https://doi.org/10.1609/aiide.v8i5.12577","url":null,"abstract":"This paper describes a prototype of a two-player collaborative 2D puzzle game, designed to elicit task-oriented situated dialog. In this game players use a text-based chat to coordinate their actions in pushing a ball through a maze of obstacles. The game will be used to collect corpora of human-human interactions in this environment. The data will be used to study how language and actions are interleaved and influence each other in situated dialog. The ultimate goal is to build a computational model of these behaviors.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126871406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ember, Toward Salience-Based Cinematic Generation","authors":"B. Cassell, R. Young","doi":"10.1609/aiide.v9i4.12630","DOIUrl":"https://doi.org/10.1609/aiide.v9i4.12630","url":null,"abstract":"Automatic cinematic generation for virtual environments and games has been shown to be capable of producing general-purpose cinematics. There is initial work that approaches cinematic generation from the perspective of narrative instead of low-level camera manipulation. In our work, we further extend this idea to take into consideration a model of user memory. Models of reader comprehension and memory have been developed to attempt to explain how people comprehend narratives. We propose using these models of narrative comprehension and memory to augment a system for cinematic generation so that it can produce a cinematic that communicates character deliberation to the viewer by maintaining the salience of specific events.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121028875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reader-Model-Based Story Generation","authors":"Peter A. Mawhorter","doi":"10.1609/aiide.v9i6.12606","DOIUrl":"https://doi.org/10.1609/aiide.v9i6.12606","url":null,"abstract":"Several existing systems have used reader models for story generation, but they have focused on either interactive contexts or pure discourse-level manipulation. I intend to build a reader-model story generator that not only applies reader modelling to full plot generation, but which also draws on theories about intentionality and emotions put forward by Lisa Zunshine, Keith Oatley, and Raymond Mar. To evaluate the contributions of the reader model, I'll compare its output with human-authored stories using measures of reader engagement. I'll also run the model on human-authored stories and compare the results to a human gold-standard analysis.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"46 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114013147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracking Creative Musical Structure: The Hunt for the Intrinsically Motivated Generative Agent","authors":"Benjamin D. Smith","doi":"10.1609/aiide.v9i5.12648","DOIUrl":"https://doi.org/10.1609/aiide.v9i5.12648","url":null,"abstract":"Neural networks have been employed to learn, generalize, and generate musical pieces with a constrained notion of creativity. Yet, these computational models typically suffer from an inability to characterize and reproduce the long-term dependencies indicative of musical structure. Hierarchical and deep learning models propose to remedy this deficiency, but have yet to be adequately proven. We describe and examine a novel dynamic Bayesian network model with the goal of learning and reproducing longer-term formal musical structures. Incorporating a computational model of intrinsic motivation and novelty, this hierarchical probabilistic model is able to generate pastiches based on exemplars.","PeriodicalId":249108,"journal":{"name":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122602155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}