URNAI: A Multi-Game Toolkit for Experimenting Deep Reinforcement Learning Algorithms
Marco A. S. Araùjo, L. P. Alves, C. Madeira, Marcos M. Nóbrega
2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), November 2020
DOI: 10.1109/SBGames51465.2020.00032
Citations: 0
Abstract
Over the last decade, many game environments have been popularized as testbeds for experimenting with reinforcement learning algorithms, an area of research that has shown great potential for artificial-intelligence-based solutions. These game environments range from the simplest, such as CartPole, to the most complex, such as StarCraft II. However, to experiment with an algorithm in each of these environments, researchers need to prepare all the settings for each one, a very time-consuming task since it entails integrating the game environment into their software and handling the game environment's variables. This paper therefore introduces URNAI, a new multi-game toolkit that enables researchers to easily experiment with deep reinforcement learning algorithms in several game environments. To do this, URNAI implements layers that integrate existing reinforcement learning libraries and existing game environments, simplifying the setup and management of several reinforcement learning components, such as algorithms, state spaces, action spaces, and reward functions. Moreover, URNAI provides a framework prepared for GPU supercomputing, which allows much faster experiment cycles. The first results with the toolkit are very promising.
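The layered design the abstract describes (separating the algorithm from pluggable state builders, action wrappers, and reward functions) can be sketched in Python. All class, method, and environment names below are illustrative assumptions for a toy one-dimensional "walk to the goal" game, not the actual URNAI API:

```python
from abc import ABC, abstractmethod

# Pluggable component interfaces, one per RL concern the toolkit manages.
class StateBuilder(ABC):
    @abstractmethod
    def build_state(self, obs):
        """Turn a raw environment observation into an agent-facing state."""

class RewardFunction(ABC):
    @abstractmethod
    def get_reward(self, obs):
        """Compute a scalar reward from a raw observation."""

class ActionWrapper(ABC):
    @abstractmethod
    def get_actions(self):
        """List the actions the agent may choose from."""

# Toy concrete components for a 1-D walk toward a goal cell.
class WalkState(StateBuilder):
    def build_state(self, obs):
        return (obs["position"],)

class WalkReward(RewardFunction):
    def get_reward(self, obs):
        # Small step penalty, bonus on reaching the goal.
        return 1.0 if obs["position"] == obs["goal"] else -0.01

class WalkActions(ActionWrapper):
    def get_actions(self):
        return [-1, +1]  # step left, step right

# Minimal environment with a reset/step interface (hypothetical, Gym-like).
class WalkEnv:
    def __init__(self, size=5):
        self.size = size
    def reset(self):
        self.pos = 0
        return {"position": self.pos, "goal": self.size}
    def step(self, action):
        self.pos = max(0, min(self.size, self.pos + action))
        obs = {"position": self.pos, "goal": self.size}
        return obs, self.pos == self.size  # (observation, done)

def run_episode(env, state_builder, reward_fn, actions, policy, max_steps=100):
    """Generic training-loop shell: the components plug in, the loop stays fixed."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        state = state_builder.build_state(obs)
        action = policy(state, actions.get_actions())
        obs, done = env.step(action)
        total += reward_fn.get_reward(obs)
        if done:
            break
    return total

# A trivial "always move right" policy reaches the goal in 5 steps.
total = run_episode(WalkEnv(5), WalkState(), WalkReward(), WalkActions(),
                    lambda state, acts: acts[1])
```

The point of the sketch is the separation of concerns: swapping the environment or the reward function means swapping one component, while the episode loop is untouched, which is the kind of setup-cost reduction the toolkit aims at.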