AI4U: A Tool for Game Reinforcement Learning Experiments
Gilzamir Gomes, C. Vidal, J. B. C. Neto, Y. L. Nogueira
2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), November 2020
DOI: 10.1109/SBGames51465.2020.00014
Citations: 0
Abstract
Reinforcement Learning is a promising approach to the design of Non-Player Characters (NPCs). It is challenging, however, to design games that support reinforcement learning because, in addition to specifying the environment and the agent that controls the character, one must model a meaningful reward function for the behavior expected from a virtual character. To alleviate these challenges, we have developed a tool that allows one to specify, in an integrated way, the environment, the agent, and the reward functions. The tool provides a visual and declarative specification of the environment through a graphic language consistent with game events. In addition, it supports the specification of non-Markovian reward functions and is integrated with a game development platform, making it possible to specify complex and interesting environments. An environment modeled with this tool supports most current state-of-the-art reinforcement learning algorithms, such as Proximal Policy Optimization and Soft Actor-Critic. The objective of the tool is to facilitate experimentation with learning in games, taking advantage of the existing ecosystem around modern game development platforms. Applications developed with the support of this tool show the potential of its game environment specifications for experimenting with reinforcement learning algorithms.
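The abstract's mention of non-Markovian reward functions can be illustrated with a minimal sketch: a reward that depends on the history of game events rather than only the current state. The event names and the class below are hypothetical illustrations, not part of AI4U's actual API.

```python
# Minimal sketch of a non-Markovian reward (illustrative, not AI4U's API).
# The reward for an event depends on earlier events in the episode, so the
# reward function keeps internal memory instead of looking only at the
# current state. Event names "get_key" / "open_door" are assumptions.

class SequenceReward:
    """Grants the door bonus only if 'get_key' happened earlier."""

    def __init__(self):
        # Internal memory is what makes this reward history-dependent.
        self.has_key = False

    def step(self, event: str) -> float:
        if event == "get_key":
            self.has_key = True
            return 0.1  # small shaping reward for picking up the key
        if event == "open_door":
            # Same event, different reward depending on the history.
            return 1.0 if self.has_key else -0.1
        return 0.0


reward_fn = SequenceReward()
rewards = [reward_fn.step(e)
           for e in ["move", "open_door", "get_key", "open_door"]]
# Opening the door before the key is penalized; after the key it pays off.
```

In practice such history-dependent rewards are often encoded as a small finite-state machine over game events, which is compatible with the event-driven graphic language the abstract describes.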