2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames): Latest Publications

Applying Hidden Markov Model for Dynamic Game Balancing
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00016
M. Zamith, José Ricardo da Silva, E. Clua, M. Joselli
{"title":"Applying Hidden Markov Model for Dynamic Game Balancing","authors":"M. Zamith, José Ricardo da Silva, E. Clua, M. Joselli","doi":"10.1109/SBGames51465.2020.00016","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00016","url":null,"abstract":"In Artificial Intelligence (AI) field, Machine Learning (ML) techniques present an interesting approach for games, where it allows some sort of adaptation along the game session. This adaptation can make games more attractive, avoiding that Non-Player-Characters (NPC) present too easy or hard patterns during the game. In both cases, the player may be frustrated due to undesired experience. Although ML techniques are appealing to be used in games, some games characteristics are hard to model. Besides, there are techniques that require a wide variety of observations, which implies two hard barriers for game application: the first is the power processing to compute a huge amount of data in games, considering the real-time characteristic of this kind of application. The second threat is related to the vast majority of games' attributes that must be described in the model. This work proposes a novel approach using ML technique based on Hidden Markov Model (HMM) for game balancing process. HMM is a powerful technique which can be used to learn patterns based on a strong co-relational between an observation and an unknown variable (the hidden part). Our proposed approach learns the player's pattern based on temporal frame observation by co-relating his/her actions (movements) with game events (NPC destruction). The temporal frame observation approach allows the game to learn about player's pattern even if a different person plays it. After the learning process, the following step is to use the knowledge pattern to adapt the game according to the current player, which normally involves making the game harder for a certain period of time. During this time, another pattern may arise, subjected to be learned. In order to validate the presented approach, a Space Invaders clone has been built, allowing to observe that 54 % of participants had more fun while playing it with ML activated in relation to a base version that did not take into account dynamic difficult balancing.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"88 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126027423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
AI4U: A Tool for Game Reinforcement Learning Experiments
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00014
Gilzamir Gomes, C. Vidal, J. B. C. Neto, Y. L. Nogueira
{"title":"AI4U: A Tool for Game Reinforcement Learning Experiments","authors":"Gilzamir Gomes, C. Vidal, J. B. C. Neto, Y. L. Nogueira","doi":"10.1109/SBGames51465.2020.00014","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00014","url":null,"abstract":"Reinforcement Learning is a promising approach to the design of Non-Player Characters (NPCs). It is challenging, however, to design games enabled to support reinforcement learning because, in addition to specifying the environment and the agent that controls the character, there is the challenge of modeling a significant reward function for the expected behavior from a virtual character. To alleviate the challenges of this problem, we have developed a tool that allows one to specify, in an integrated way, the environment, the agent, and the reward functions. The tool provides a visual and declarative specification of the environment, providing a graphic language consistent with game events. Besides, it supports the specification of non-Markovian reward functions and is integrated with a game development platform that makes it possible to specify complex and interesting environments. An environment modeled with this tool supports the implementation of most current state-of-the-art reinforcement learning algorithms, such as Proximal Policy Optimization and Soft Actor-Critic algorithms. The objective of the developed tool is to facilitate the experimentation of learning in games, taking advantage of the existing ecosystem around modern game development platforms. Applications developed with the support of this tool show the potential for specifying game environments to experiment with reinforcement learning algorithms.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124716146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards the Design of Adaptive Virtual Reality Horror Games: A Model of Players' Fears Using Machine Learning and Player Modeling
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00031
E. S. D. Lima, Bruno M. C. Silva, Gabriel Teixeira Galam
{"title":"Towards the Design of Adaptive Virtual Reality Horror Games: A Model of Players' Fears Using Machine Learning and Player Modeling","authors":"E. S. D. Lima, Bruno M. C. Silva, Gabriel Teixeira Galam","doi":"10.1109/SBGames51465.2020.00031","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00031","url":null,"abstract":"Horror games are designed to induce fear in players. Although fundamental fears, such as the unknown, are inherent to the human being, more specific fears, such as darkness and apparitions, are individual and can vary from person to person. When a game aims at intensifying the fear evoked in individual players, having useful information about the fears of the current player is vital to promote more frightening experiences. This paper explores fear modeling and presents a new method to identify what players fear in a virtual reality horror game. The proposed method uses machine learning and player modeling techniques to create a model of players' fears, which can be used to adapt in-game horror elements to intensify the fear evoked in players. The paper presents the proposed method and evaluates its accuracy and real-time performance in a virtual reality horror game.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"237 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134254359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Investigating Case Learning Techniques for Agents to Play the Card Game of Truco
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00024
Ruan C. B. Moral, G. B. Paulus, J. Assunção, L. A. L. Silva
{"title":"Investigating Case Learning Techniques for Agents to Play the Card Game of Truco","authors":"Ruan C. B. Moral, G. B. Paulus, J. Assunção, L. A. L. Silva","doi":"10.1109/SBGames51465.2020.00024","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00024","url":null,"abstract":"Truco is a popular game in many regions of South America; however, unlike worldwide games, Truco still requires a competitive Artificial Intelligence. Due to the limited availability of Truco data and the stochastic and imperfect information characteristics of the game, creating competitive models for a card game like Truco is a challenging task. To approach this problem, this work investigates the generation of concrete Truco problem-solving experiences through alternative techniques of automatic case generation and active learning, aiming to learn with the retention of cases in case bases. From this, these case bases guide the playing actions of the implemented Truco bots permitting to assess the capabilities of each bot, all implemented with Case-Based Reasoning (CBR) techniques.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115765424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Drafting in Collectible Card Games via Reinforcement Learning
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00018
R. Vieira, A. Tavares, L. Chaimowicz
{"title":"Drafting in Collectible Card Games via Reinforcement Learning","authors":"R. Vieira, A. Tavares, L. Chaimowicz","doi":"10.1109/SBGames51465.2020.00018","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00018","url":null,"abstract":"Collectible card games are played by tens of millions of players worldwide. Their intricate rules and diverse cards make them much harder than traditional card games. To win, players must be proficient in two interdependent tasks: deck building and battling. In this paper, we present a deep reinforcement learning approach for deck building in arena mode - an understudied game mode present in many collectible card games. In arena, the players build decks immediately before battling by drafting one card at a time from randomly presented candidates. We investigate three variants of the approach and perform experiments on Legends of Code and Magic, a collectible card game designed for AI research. Results show that our learned draft strategies outperform those of the best agents of the game. Moreover, a participant of the Strategy Card Game AI competition improves from tenth to fourth place when coupled with our best draft agent.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"24 40","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132275092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
An Intelligent Agent Playing Generic Action Games based on Deep Reinforcement Learning with Memory Restrictions
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00015
Lucas Antunes de Almeida, M. Thielo
{"title":"An Intelligent Agent Playing Generic Action Games based on Deep Reinforcement Learning with Memory Restrictions","authors":"Lucas Antunes de Almeida, M. Thielo","doi":"10.1109/SBGames51465.2020.00015","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00015","url":null,"abstract":"Among the topics that increasingly gained special attention in Computer Science recently, the evolution of Artificial Intelligence has been one of the most prominent subjects, especially when related to games. In this work we developed an intelligent agent with memory restrictions so to investigate its ability to learn playing multiple, different games without the need of being provided with specific details for each of the games. As a measure of quality of the agent, we used the difference between its score and the scores obtained by casual human players. Aiming to address the possibilities of using Deep Learning for General Game Playing in less powerful devices, we explicitly limited the amount of memory available for the agent, apart from the commonly used physical memory limit for most works in the area. For the abstraction of machine learning and image processing stages, we used the Keras and Gym libraries. As a result, we obtained an agent capable of playing multiple games without the need to provide rules in advance, but receiving at each moment only the game video frame, the current score and whether the current state represents an endgame. To assess the agent effectiveness, we submitted it to a set of Atari 2600™ games, where the scores obtained were compared to casual human players and discussed. In the conclusion, we show that promising results were obtained for these games even with memory limitations and finally a few improvements are proposed.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128303541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
URNAI: A Multi-Game Toolkit for Experimenting Deep Reinforcement Learning Algorithms
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00032
Marco A. S. Araùjo, L. P. Alves, C. Madeira, Marcos M. Nóbrega
{"title":"URNAI: A Multi-Game Toolkit for Experimenting Deep Reinforcement Learning Algorithms","authors":"Marco A. S. Araùjo, L. P. Alves, C. Madeira, Marcos M. Nóbrega","doi":"10.1109/SBGames51465.2020.00032","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00032","url":null,"abstract":"In the last decade, several game environments have been popularized as testbeds for experimenting reinforcement learning algorithms, an area of research that has shown great potential for artificial intelligence based solutions. These game environments range from the simplest ones like CartPole to the most complex ones such as StarCraft II. However, in order to experiment an algorithm in each of these environments, researchers need to prepare all the settings for each one, a task that is very time consuming since it entails integrating the game environment to their software and treating the game environment variables. So, this paper introduces URNAI, a new multi-game toolkit that enables researchers to easily experiment with deep reinforcement learning algorithms in several game environments. To do this, URNAI implements layers that integrate existing reinforcement learning libraries and existing game environments, simplifying the setup and management of several reinforcement learning components, such as algorithms, state spaces, action spaces, reward functions, and so on. Moreover, URNAI provides a framework prepared for GPU supercomputing, which allows much faster experiment cycles. The first toolkit results are very promising.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130442195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of an Autonomous Agent based on Reinforcement Learning for a Digital Fighting Game
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00017
J. R. Bezerra, L. F. Góes, Alysson Ribeiro Da Silva
{"title":"Development of an Autonomous Agent based on Reinforcement Learning for a Digital Fighting Game","authors":"J. R. Bezerra, L. F. Góes, Alysson Ribeiro Da Silva","doi":"10.1109/SBGames51465.2020.00017","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00017","url":null,"abstract":"In this work, an autonomous agent based on reinforcement learning is implemented in a digital fighting game. The implemented agent uses Fusion Architecture for Learning, COgnition, and Navigation (FALCON) and Associative Resonance Map (ARAM) neural networks. The experimental results show that the autonomous agent is able to develop game strategies using the experience acquired in the matches, and achieves a winning rate of up to 90% against an agent with fixed behavior.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114979541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Baseline Approach for Goalkeeper Strategy using Sarsa with Tile Coding on the Half Field Offense Environment
Pub Date: 2020-11-01 | DOI: 10.1109/SBGames51465.2020.00012
V. G. F. Barbosa, R. Neto, Roberto V. L. Gomes Rodrigues
{"title":"A Baseline Approach for Goalkeeper Strategy using Sarsa with Tile Coding on the Half Field Offense Environment","authors":"V. G. F. Barbosa, R. Neto, Roberto V. L. Gomes Rodrigues","doi":"10.1109/SBGames51465.2020.00012","DOIUrl":"https://doi.org/10.1109/SBGames51465.2020.00012","url":null,"abstract":"Much research in RoboCup 2D Soccer Simulation has used the Half Field Offense (HFO) environment. This work proposes a baseline approach for goalkeeper strategy using Reinforcement Learning on HFO. The proposed approach uses Sarsa with eligibility traces and Tile Coding for the discretization of state variables. Two comparative studies were conducted to validate the proposed baseline. First, a comparative study between the Agent2D's goalkeeper strategy and a random decision strategy was performed. The second comparative study verified the performance of the proposed approach against a random decision strategy. Wilcoxon's Signed-Rank test was used for measuring the statistical significance of performance differences. Experiments showed that the Agent2D's goalkeeper strategy is inferior to a random decision, and the proposed baseline delivers a performance superior to a random decision strategy with a confidence level of 95%.","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132577435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
[Title page]
Pub Date: 2020-11-01 | DOI: 10.1109/sbgames51465.2020.00001
{"title":"[Title page]","authors":"","doi":"10.1109/sbgames51465.2020.00001","DOIUrl":"https://doi.org/10.1109/sbgames51465.2020.00001","url":null,"abstract":"","PeriodicalId":335816,"journal":{"name":"2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116297087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0