{"title":"MOBA Game Item Recommendation via Relation-aware Graph Attention Network","authors":"Lijuan Duan, Shuxin Li, Wenbo Zhang, Wenjian Wang","doi":"10.1109/CoG51982.2022.9893595","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893595","url":null,"abstract":"Recommender systems based on graph attention networks have received increasing attention due to their excellent ability to learn various side information. However, previous work usually focused on game character recommendation without paying much attention to items. In addition, as the team of the match changes, the items used by the characters may also change. To overcome these limitations, we propose a relation-aware graph attention item recommendation method. It considers the relationship between characters and items. Furthermore, the graph attention mechanism aggregates the embeddings of items and analyzes the effects of items on related characters while assigning attention weights between characters and items. Extensive experiments on the kaggle public game dataset show that our method significantly outperforms previous methods in terms of Precision, F1 and MAP compared to other existing methods.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125279903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Game Soundtrack Tempo Based on Players’ Actions","authors":"M. Makhmutov, J. A. Brown, Maksim Surkov, Anton Timchenko, Kamilya Timchenko","doi":"10.1109/CoG51982.2022.9893604","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893604","url":null,"abstract":"A well-designed video game soundtrack can significantly affect human game perception, especially when there is an intuitive link between musical and game features. The soundtrack intuitiveness can be increased by making it adaptive and dependent on players’ actions. The tempo is one of the music characteristics, and this change is relatively easy to distinguish even for non-musicians because it is often interpreted as a speed. This work examines the existence of different correlations between players’ in-game actions and soundtrack tempo. Authors suppose that results of conducted playtesting with humans can improve game development from the musical side, increasing players’ engagement with the game. The playtesting is done based on a simple runner game called MAK, which was developed for scientific purposes. This research aims to find intuitive dependencies between six considered game actions and soundtrack tempo.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"37 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114093893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mjx: A framework for Mahjong AI research","authors":"Sotetsu Koyamada, Keigo Habara, Nao Goto, Shinri Okano, Soichiro Nishimori, Shin Ishii","doi":"10.1109/CoG51982.2022.9893712","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893712","url":null,"abstract":"Numerous games have served as testbeds for artificial intelligence (AI) research to measure its progress. Mahjong is a highly challenging multi-agent imperfect information game with a vast player population. However, a challenge with using Mahjong as a testbed for AI is the lack of a publicly available framework that is fast, easy to use and implements popular rules for human players. We propose and describe Mjx, an open-source Mahjong framework, which implements one of the most popular Mahjong rules, riichi Mahjong (Japanese Mahjong). We compared the execution speed of Mjx with existing popular open-source software and demonstrated that it achieves 100x faster performance. Mjx is available at https://github.conmjx-project/mjx.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129407386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PIFE: Permutation Invariant Feature Extractor for Danmaku Games","authors":"Takuto Itoi, E. Simo-Serra","doi":"10.1109/CoG51982.2022.9893649","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893649","url":null,"abstract":"Dealing with unstructured complex patterns provides a challenge to existing reinforcement patterns. In this research, we propose a new model to overcome the difficulty in challenging danmaku games. Touhou Project is one of the bestknown games in the bullet hell genre also known as danmaku, where a player has to dodge complex patterns of bullets on the screen. Furthermore, the agent needs to react to the environment in real-time, which made existing methods having difficulties processing the high-volume data of objects; bullets, enemies, etc. We introduce an environment for the Touhou Project game‘東方花映塚~Phantasmagoria of Flower View.’ which manipulates the memory of the running game and enables to control the character. However, the game state information consists of unstructured and unordered data not amenable for training existing reinforcement learning models, as they are not invariant to order changes in the input. To overcome this issue, we propose a new pooling-based reinforcement learning approach that is able to handle permutation invariant inputs by extracting abstract values and merging them in an order-independent way. Experimental results corroborate the effectiveness of our approach which shows significantly increased scores compared to existing baseline approaches.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130699337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Turing Test Framework for Cooperative Games","authors":"In-Chang Baek, Taehwa Park, Taegwan Ha, Kyung-Joong Kim","doi":"10.1109/CoG51982.2022.9893684","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893684","url":null,"abstract":"Recently, several attempts have been made to train cooperative artificial intelligence (AI). From training superhuman-level agents to human-like agents, the purpose of an AI results in differences in the behavior policy. Indeed, training a human-like agent could enhance the experience of multiplayer game players. However, training human-like agents is challenging and there is little existing work concerning benchmarking cooperative agents with actual humans. As an initial step to address this problem, we suggest a software program and an experimental procedure to conduct Turing tests in multiplayer games. Our contribution will help current multiagent studies benchmark the human-likeness of the agents and investigate their characteristics.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130740064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement Learning using Reward Expectations in Scenarios with Aleatoric Uncertainties","authors":"Yubin Wang, Yifeng Sun, Jiang Wu, Hao Hu, Zhiqiang Wu, Weigui Huang","doi":"10.1109/CoG51982.2022.9893651","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893651","url":null,"abstract":"In scenarios with aleatoric uncertainties, the reward got by an agent when executing the same action in the same state is random, which can reduce the stability and convergence speed of the reinforcement algorithms. However, in most scenarios, reward functions have regularity, and their expectations are determined, which can be got through models or sample statistics. This paper discusses the distribution relationship between reward functions and value functions in scenarios with aleatoric uncertainties and proves the feasibility of using reward expectations for reinforcement learning. Finally, experiments show that algorithms have better stability and convergence speed when using reward expectations than random rewards.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127429432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realistic Game Avatars Auto-Creation from Single Images via Three-pathway Network","authors":"Jiangke Lin, Lincheng Li, Yi Yuan, Zhengxia Zou","doi":"10.1109/CoG51982.2022.9893688","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893688","url":null,"abstract":"We propose a novel single image 3D face reconstruction method for realistic in-game avatar auto-creation. Although some existing 3D face reconstruction methods have been able to generate good geometry, there are still some shortages in texture generation, especially diffuse prediction, which limits its application in games or other scenarios. The main problems of these methods include: the details in the photo are not accurately restored, the produced diffuse is over smoothed, or the occlusion and lighting are not correctly removed, and so on. Although some methods collect high-quality 3D face data for neural networks to learn to generate realistic 3D faces, collecting 3D face data is known expensive. To address the above problems, we propose to utilize data from three sources, including single face images, manually inpainted diffuse maps paired with face portraits, and multiple photos of single IDs generated by a pretrained network. To make full use of these data, we propose a three-pathway network architecture that takes face images as input, produces diffuse maps, normal maps, as well as pose and light coefficients. The network parameters are optimized by comparing the rendered results with the input images, along with some other objective functions.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123500088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cute Helper: A Study on the Effect of Virtual Character Expressions on Players’ Engagement in a Game for Collecting Artwork Descriptions","authors":"Albertus Agung, Roman Savchyn, Pujana Paliyawan, R. Thawonmas","doi":"10.1109/CoG51982.2022.9893683","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893683","url":null,"abstract":"This study implements a virtual character as a moderator in JUSTIN, a game designed and developed for collecting descriptions of ukiyo-e artworks on a live-streaming platform. The game has shown to be effective. However, the repetitive nature and the necessity to play JUSTIN many rounds make its players less interested in continuing the game. Hence, we examine if a virtual character can improve the player’s enjoyment and engagement experience. To conduct a control experiment, we develop a prototype that simulates JUSTIN, but with only one player playing the game at a time, and run an experiment with it. Our preliminary results show that a virtual character that changes its expression according to the game situation is promising in promoting the enjoyment and engagement of JUSTIN players.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116805495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stirring the Pot - Teaching Reinforcement Learning Agents a ”Push-Your-Luck” board game","authors":"M. Hünemörder, Mirjam Bayer, Nadine Sarah Schüler, Peer Kröger","doi":"10.1109/CoG51982.2022.9893657","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893657","url":null,"abstract":"Recent successes in AI research concerning traditional games like GO, have led to increased interest in the field of reinforcement learning. Modern board game design, however, has risen in complexity. This paper introduces a novel task for reinforcement learning: “Quacks of Quedlinburg”. A modern board game with risk management, deck building, and the option to choose a specific rule set out of thousands of possible combinations for every game. We provide an environment based on the game and perform initial experiments. In these, we found that Deep Q-Learning agents can significantly outperform simple heuristics.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130703443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Strategies for Imperfect Information Board Games Using Depth-Limited Counterfactual Regret Minimization and Belief State","authors":"Chen Chen, Tomoyuki Kaneko","doi":"10.1109/CoG51982.2022.9893713","DOIUrl":"https://doi.org/10.1109/CoG51982.2022.9893713","url":null,"abstract":"Counterfactual Regret Minimization (CFR) variants have mastered many Poker games by effectively handling a large number of opportunities in private information within relatively short playing histories of the game. However, for imperfect information board games with infrequent chance events but long histories or even loops, the effectiveness of CFR is often limited in practice as the computational complexity grows exponentially with the game length. In this paper, we propose Belief States with Approximation by Dirichlet Distributions and Depth-limited External Sampling for Board Games that enables an effective abstraction even with existence of loops. Experiments show that our proposed methods have the ability to learn reasonable strategies.","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127870828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}