Investigating Deep Q-Network Agent Sensibility to Texture Changes on FPS Games
P. Serafim, Y. L. Nogueira, C. Vidal, J. B. C. Neto, R. F. Filho
2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), November 2020. DOI: 10.1109/SBGames51465.2020.00025
Graphical updates are common in modern digital games; PC versions, for instance, often receive higher-resolution textures over time. This can be a problem for autonomous agents trained to play a game with Convolutional Neural Networks, since these agents take the screen's pixels as input, and changing those pixels can harm their performance. In this work, we evaluate agents' sensitivity to texture changes. Agents are trained to play a First-Person Shooter game and are then presented with different versions of the same scenario, differing only in their textures. As the testbed, we use a ViZDoom scenario in which the agent must kill a static monster. Four agents are trained with Deep Q-Networks, one in each of four scenario versions, and every agent is then tested on all four. We show that although each agent learns the behaviors needed to win in the version it was trained on, none generalizes to all other versions. In only one case did an agent perform well in a scenario different from the one it was trained on. Most of the time, an agent moved randomly or stood still while shooting continuously, indicating that it could not interpret the current screen. Even when the background textures were kept the same, the agent could not identify the enemy. Thus, to ensure proper behavior, an agent needs to be retrained not only when the problem itself changes, but also when only its visual appearance is modified.
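For readers who want to reproduce the gist of this protocol, the sketch below uses the ViZDoom Python API to score a single trained Deep Q-Network agent on each texture variant of the scenario. It is a minimal illustration only: the WAD file names (variant_a.wad, ...), the checkpoint path, the small PyTorch CNN, and the preprocessing are assumptions for the sake of a runnable example, not the authors' exact network or training code.

```python
# Illustrative sketch of the evaluation protocol: a DQN agent trained on one
# texture variant of a ViZDoom scenario is scored on every variant.
# WAD names, checkpoint path, network shape, and preprocessing are assumed.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import vizdoom as vzd

class QNetwork(nn.Module):
    """Small CNN mapping an 84x84 grayscale frame to one Q-value per action."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),   # 84x84 -> 20x20
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # 20x20 -> 9x9
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def preprocess(screen: np.ndarray) -> torch.Tensor:
    # With ScreenFormat.GRAY8 the buffer is an (H, W) uint8 array;
    # scale to [0, 1] and resize to the network's 84x84 input.
    x = torch.from_numpy(screen).float().div_(255.0)[None, None]  # 1x1xHxW
    return F.interpolate(x, size=(84, 84), mode="bilinear", align_corners=False)

def evaluate(agent: QNetwork, wad_path: str, episodes: int = 20) -> float:
    """Average episode reward of a fixed (greedy) agent on one scenario variant."""
    game = vzd.DoomGame()
    game.load_config("scenarios/basic.cfg")   # basic.cfg ships with ViZDoom
    game.set_doom_scenario_path(wad_path)     # texture-variant WAD (hypothetical file)
    game.set_screen_format(vzd.ScreenFormat.GRAY8)
    game.set_window_visible(False)
    game.init()
    # One-hot action vectors for MOVE_LEFT, MOVE_RIGHT, ATTACK (basic.cfg's buttons).
    actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    total = 0.0
    for _ in range(episodes):
        game.new_episode()
        while not game.is_episode_finished():
            frame = preprocess(game.get_state().screen_buffer)
            with torch.no_grad():
                a = int(agent(frame).argmax(dim=1))  # greedy action, no exploration
            game.make_action(actions[a], 4)          # repeat the action for 4 tics
        total += game.get_total_reward()
    game.close()
    return total / episodes

if __name__ == "__main__":
    agent = QNetwork(n_actions=3)
    agent.load_state_dict(torch.load("dqn_variant_a.pt"))  # hypothetical checkpoint
    agent.eval()
    # The four texture variants of the same map (hypothetical file names).
    for wad in ["variant_a.wad", "variant_b.wad", "variant_c.wad", "variant_d.wad"]:
        print(f"{wad}: mean reward {evaluate(agent, wad):.1f}")
```

An agent that had truly generalized would score comparably on all four variants; the paper's finding is that, with a single exception, performance collapses on every variant other than the one the agent was trained on.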