Deep Recurrent Q-Network with Truncated History

Hyunwoo Oh, Tomoyuki Kaneko
DOI: 10.1109/TAAI.2018.00017
Published in: 2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI), 2018-11-01
Citations: 5

Abstract

Reinforcement learning is a machine learning paradigm in which agents learn through interaction with the environment. Deep Q-Network (DQN), a reinforcement learning model based on deep neural networks, succeeded in learning human-level control policies on a variety of Atari 2600 games from raw image pixels. Because the input to DQN is the game frames of the last four steps, DQN has difficulty mastering games that require remembering events more than four steps in the past. To alleviate this problem, Deep Recurrent Q-Network (DRQN) and Deep Attention Recurrent Q-Network (DARQN) were proposed. In DRQN, the first fully connected layer after the convolutional layers is replaced with an LSTM to incorporate past information. DARQN adds visual attention mechanisms on top of DRQN. We propose two new reinforcement learning models: Deep Recurrent Q-Network with Truncated History (T-DRQN) and Deep Attention Recurrent Q-Network with Truncated History (T-DARQN). T-DRQN uses a truncated history, so that the length of the history to be considered can be controlled. T-DARQN adds a visual attention mechanism on top of T-DRQN. Experiments with our models on six Atari 2600 games show a level of performance between DQN and D(A)RQN. Furthermore, the results show the necessity of using past information of a truncated length, rather than only the current information or all of the past information.
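The core idea of T-DRQN described above can be illustrated with a toy sketch: Q-values are produced by unrolling a recurrent cell over only the last k observations (the truncated history), instead of a single stacked frame (DQN) or the full episode (DRQN). This is a minimal illustration under assumed shapes, not the paper's implementation — the convolutional feature extractor is stood in for by a fixed random projection, and the LSTM by a simplified single-gate recurrent cell; all class and parameter names are hypothetical.

```python
import numpy as np
from collections import deque

class TruncatedHistoryQNet:
    """Toy sketch of T-DRQN's core idea (not the paper's code):
    Q-values come from unrolling a recurrent cell over only the
    last `history_len` observations."""

    def __init__(self, obs_dim, feat_dim, hidden_dim, n_actions,
                 history_len, seed=0):
        rng = np.random.default_rng(seed)
        self.hidden_dim = hidden_dim
        # Truncated history: deque drops observations older than history_len.
        self.history = deque(maxlen=history_len)
        # Stand-in for the convolutional feature extractor.
        self.W_feat = rng.normal(0.0, 0.1, (feat_dim, obs_dim))
        # Simplified recurrent cell (an LSTM in the actual model).
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.W_x = rng.normal(0.0, 0.1, (hidden_dim, feat_dim))
        # Linear Q-value head over the final hidden state.
        self.W_q = rng.normal(0.0, 0.1, (n_actions, hidden_dim))

    def q_values(self, obs):
        """Append obs to the truncated history, then unroll from a
        zero hidden state over at most `history_len` steps."""
        self.history.append(np.asarray(obs, dtype=float))
        h = np.zeros(self.hidden_dim)
        for o in self.history:          # only the truncated history
            x = np.tanh(self.W_feat @ o)
            h = np.tanh(self.W_h @ h + self.W_x @ x)
        return self.W_q @ h             # one Q-value per action

net = TruncatedHistoryQNet(obs_dim=16, feat_dim=8, hidden_dim=8,
                           n_actions=4, history_len=4)
for t in range(10):
    q = net.q_values(np.full(16, float(t)))
print("history length:", len(net.history))  # capped at history_len
print("Q-values shape:", q.shape)
```

Setting `history_len=1` recovers a DQN-like memoryless estimator, while letting the deque grow unbounded would correspond to DRQN's full-episode unrolling; the truncated buffer is the knob the paper studies.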