Deep reinforcement learning in a spatial navigation task: Multiple contexts and their representation
Nicolas Diekmann, T. Walther, Sandhiya Vijayabaskaran, Sen Cheng
2019 Conference on Cognitive Computational Neuroscience. DOI: https://doi.org/10.32470/ccn.2019.1151-0
Abstract
Deep learning has recently been combined with Q-learning (Mnih et al., 2015) to enable learning of difficult tasks, such as playing video games based only on visual input. Stable learning in the deep Q-network (DQN) is facilitated by the use of memory replay, which means that previous experiences are stored and sampled from during an offline learning period. We evaluate the DQN's ability to learn and retain multiple variations of a spatial navigation task in a virtual environment. Task variations are presented in visually distinct contexts by varying light conditions and environmental textures. Replay memory capacity is varied to measure its effect on task retention. The representations of multiple contexts learned by the DQN agents are analyzed and compared. We show that DQN agents learn a preference for common actions early on, irrespective of replay memory capacity. A limited replay memory causes agents to confuse state values. Furthermore, we find that contexts are quickly forgotten as soon as the corresponding experiences are no longer available in the replay memory.
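The replay mechanism the abstract describes can be pictured as a fixed-capacity buffer of transitions that is sampled uniformly at random during offline updates. Below is a minimal sketch of that idea, not the paper's implementation; the class name ReplayBuffer, its methods, and the capacity value are illustrative assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of agent transitions (hypothetical helper, for illustration)."""

    def __init__(self, capacity):
        # Once capacity is exceeded, the oldest transitions are silently
        # evicted, which is what lets small buffers forget earlier contexts.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch for an offline Q-learning update.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


# Toy usage: fill the buffer past capacity, then sample a minibatch.
buf = ReplayBuffer(capacity=1000)
for t in range(2000):
    buf.push(state=t, action=0, reward=0.0, next_state=t + 1, done=False)
print(len(buf))             # 1000 -- the first 1000 transitions were evicted
print(len(buf.sample(32)))  # 32
```

Under this eviction scheme, once an agent stops visiting a context, that context's transitions age out of the buffer after at most `capacity` new steps, which is consistent with the rapid forgetting the abstract reports when experiences leave the replay memory.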