{"title":"Surrogate Infeasible Fitness Acquirement FI-2Pop for Procedural Content Generation","authors":"R. Gallotta, Kai Arulkumaran, L. Soros","doi":"10.1109/CoG51982.2022.9893592","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893592","url":null,"abstract":"When generating content for video games using procedural content generation (PCG), the goal is to create functional assets of high quality. Prior work has commonly leveraged the feasible-infeasible two-population (FI-2Pop) constrained optimisation algorithm for PCG, sometimes in combination with the multi-dimensional archive of phenotypic-elites (MAP-Elites) algorithm for finding a set of diverse solutions. However, the fitness function for the infeasible population only takes into account the number of constraints violated. In this paper we present a variant of FI-2Pop in which a surrogate model is trained to predict the fitness of feasible children from infeasible parents, weighted by the probability of producing feasible children. This drives selection towards higher-fitness, feasible solutions. We demonstrate our method on the task of generating spaceships for Space Engineers, showing improvements over both standard FI-2Pop, and the more recent multi-emitter constrained MAP-Elites algorithm.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117065874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Verge of Solving Rocket League using Deep Reinforcement Learning and Sim-to-sim Transfer","authors":"Marco Pleines, Konstantin Ramthun, Yannik Wegener, Hendrik Meyer, M. Pallasch, Sebastian Prior, Jannik Drögemüller, Leon Büttinghaus, Thilo Röthemeyer, Alexander Kaschwig, Oliver Chmurzynski, Frederik Rohkrähmer, Roman Kalkreuth, F. Zimmer, M. Preuss","doi":"10.1109/CoG51982.2022.9893628","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893628","url":null,"abstract":"Autonomously trained agents that are supposed to play video games reasonably well rely either on fast simulation speeds or heavy parallelization across thousands of machines running concurrently. This work explores a third way that is established in robotics, namely sim-to-real transfer, or if the game is considered a simulation itself, sim-to-sim transfer. In the case of Rocket League, we demonstrate that single behaviors of goalies and strikers can be successfully learned using Deep Reinforcement Learning in the simulation environment and transferred back to the original game. Although the implemented training simulation is to some extent inaccurate, the goalkeeping agent saves nearly 100% of its faced shots once transferred, while the striking agent scores in about 75% of cases. Therefore, the trained agent is robust enough and able to generalize to the target domain of Rocket League.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132291439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PGD: A Large-scale Professional Go Dataset for Data-driven Analytics","authors":"Yifan Gao","doi":"10.1109/CoG51982.2022.9893704","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893704","url":null,"abstract":"Lee Sedol is on a winning streak–does this legend rise again after the competition with AlphaGo? Ke Jie is invincible in the world championship–can he still win the title this time? Go is one of the most popular board games in East Asia, with a stable professional sports system that has lasted for decades in China, Japan, and Korea. There are mature data-driven analysis technologies for many sports, such as soccer, basketball, and esports. However, developing such technology for Go remains nontrivial and challenging due to the lack of datasets, meta-information, and in-game statistics. This paper creates the Professional Go Dataset (PGD), containing 98,043 games played by 2,148 professional players from 1950 to 2021. After manual cleaning and labeling, we provide detailed meta-information for each player, game, and tournament. Moreover, the dataset includes analysis results for each move in the match evaluated by advanced AlphaZero-based AI. To establish a benchmark for PGD, we further analyze the data and extract meaningful in-game features based on prior knowledge related to Go that can indicate the game status. With the help of complete meta-information and constructed in-game features, our results prediction system achieves an accuracy of 75.30%, much higher than several state-of-the-art approaches (64%-65%). As far as we know, PGD is the first dataset for data-driven analytics in Go and even in board games. Beyond this promising result, we provide more examples of tasks that benefit from our dataset. The ultimate goal of this paper is to bridge this ancient game and the modern data science community. It will advance research on Go-related analytics to enhance the fan experience, help players improve their ability, and facilitate other promising aspects. The dataset will be made publicly available.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"175 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132527163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Wordle for Learning to Design and Compare Strategies","authors":"Chao-Lin Liu","doi":"10.1109/CoG51982.2022.9893585","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893585","url":null,"abstract":"Wordle has become a very popular online game since November 2021. We designed and evaluated several strategies for solving Wordle in this paper. Our strategies achieved impressive performances in realistic evaluations that aimed to guess all of the known answers of the current Wordle. On average, we may solve a Wordle game with about 3.67 guesses, solve a Wordle game with six or fewer guesses higher than 98% of the time, and hit the answer with 2 or fewer guesses more than 5% of the time. In fact, our strategies are applicable to the word guessing games that are more general than the current Wordle. More importantly, we present our work in ways that our experiences may be used as classroom examples for learning to design strategies for computer games.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124887756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DouZero+: Improving DouDizhu AI by Opponent Modeling and Coach-guided Learning","authors":"Youpeng Zhao, Jian Zhao, Xu Hu, Wen-gang Zhou, Houqiang Li","doi":"10.1109/CoG51982.2022.9893710","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893710","url":null,"abstract":"Recent years have witnessed the great breakthrough of deep reinforcement learning (DRL) in various perfect and imperfect information games. Among these games, DouDizhu, a popular card game in China, is very challenging due to the imperfect information, large state and action space as well as elements of collaboration. Recently, a DouDizhu AI system called DouZero has been proposed. Trained using traditional Monte Carlo method with deep neural networks and self-play procedure without the abstraction of human prior knowledge, DouZero has achieved the best performance among all the existing DouDizhu AI programs. In this work, we propose to enhance DouZero by introducing opponent modeling into DouZero. Besides, we propose a novel coach network to further boost the performance of DouZero and accelerate its training process. With the integration of the above two techniques into DouZero, our DouDizhu AI system achieves better performance and ranks top in the Botzone leaderboard among more than 400 AI agents, including DouZero.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114197654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Linking Level Segments","authors":"Colan F. Biemer, Seth Cooper","doi":"10.1109/CoG51982.2022.9893705","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893705","url":null,"abstract":"An increasingly common area of study in procedural content generation is the creation of level segments: short pieces that can be used to form larger levels. Previous work has used concatenation to form these larger levels. However, even if the segments themselves are completable and well-formed, concatenation can fail to produce levels that are completable and can cause broken in-game structures (e.g. malformed pipes in Mario). We show this with three tile-based games: a side-scrolling platformer, a vertical platformer, and a top-down roguelike. To address this, we present a Markov chain and a tree search algorithm that finds a link between two level segments, which uses filters to ensure completability and unbroken in-game structures in the linked segments. We further show that these links work well for multi-segment levels. We find that this method reliably finds links between segments and is customizable to meet a designer’s needs.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114819744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DareFightingICE Competition: A Fighting Game Sound Design and AI Competition","authors":"Ibrahim Khan, T. Nguyen, Xincheng Dai, R. Thawonmas","doi":"10.1109/CoG51982.2022.9893624","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893624","url":null,"abstract":"This paper presents a new competition-at the 2022 IEEE Conference on Games (CoG)- called DareFightingICE Competition. The competition has two tracks: a sound design track and an AI track. The game platform for this competition is also called DareFightingICE, a fighting game platform. DareFightingICE is a sound-design-enhanced version of FightingICE, used earlier in a competition at CoG until 2021 to promote artificial intelligence (AI) research in fighting games. In the sound design track, participants compete for the best sound design, given the default sound design of DareFightingICE as a sample, where we define a sound design as a set of sound effects combined with the source code that implements their timing-control algorithm. Participants of the AI track are asked to develop their AI algorithm that controls a character given only sound as the input (blind AI) to fight against their opponent; a sample deep-learning blind AI will be provided by us. Our means to maximize the synergy between the two tracks are also described. This competition serves to come up with effective sound designs for visually impaired players, a group in the gaming community which has been mostly ignored. To the best of our knowledge, DareFightingICE Competition is the first of its kind within and outside of CoG.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128347913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Illuminating the Space of Enemies Through MAP-Elites","authors":"Breno M. F. Viana, L. T. Pereira, C. Toledo","doi":"10.1109/CoG51982.2022.9893621","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893621","url":null,"abstract":"Action-Adventure games have several challenges to overcome, where the most common are enemies. The enemies’ goal is to hinder the players’ progression by taking life points, and the way they hinder this progress is distinct for different kinds of enemies. In this context, this paper introduces an extended version of an evolutionary approach for procedurally generating enemies that target the enemy’s difficulty as the goal. Our approach advances the enemy generation research by incorporating a MAP-Elites population to generate diverse enemies without losing quality. The computational experiment showed the method converged most enemies in the MAP-Elites in less than a second for most cases. Besides, we experimented with players who played an Action-Adventure game prototype with enemies we generated. This experiment showed that the players enjoyed most levels they played, and we successfully created enemies perceived as easy, medium, or hard to face.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126426222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Counter-Strike Deathmatch with Large-Scale Behavioural Cloning","authors":"Tim Pearce, Jun Zhu","doi":"10.1109/CoG51982.2022.9893617","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893617","url":null,"abstract":"This paper describes an AI agent that plays the modern first-person-shooter (FPS) video game ‘Counter-Strike; Global Offensive’ (CSGO) from pixel input. The agent, a deep neural network, matches the performance of a casual human gamer on the deathmatch game mode whilst adopting a humanlike play style. Much previous research has focused on games with convenient APIs and low-resolution graphics, allowing them to be run cheaply at scale. This is not the case for CSGO, with system requirements orders of magnitude higher than previously studied FPS games. This limits the quantity of on-policy data that can be generated, precluding pure reward-driven reinforcement learning (RL) algorithms. Our solution uses a two-stage behavioural cloning methodology; 1) Pre-train on a large dataset scraped from human play on public servers (5.5 million frames or 95 hours) where actions are labelled in an automated way. 2) Fine-tune on a small dataset of clean expert demonstrations (190 thousand frames or 3 hours). This scale is an order of magnitude larger than prior work on imitation learning in FPS games, whilst being far more data efficient than pure RL algorithms. Video introduction: https://youtu.be/rnz3lmfSHv0 Code, model & datasets: https://github.com/TeaPearce","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"96 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126096621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}