Rainbow-RND: a Value-based Algorithm Augmented with Intrinsic Curiosity

Sarah Nait Bahloul, Younes Mahmoudi
{"title":"Rainbow-RND:一种增强了内在好奇心的基于值的算法","authors":"Sarah Nait Bahloul, Younes Mahmoudi","doi":"10.1109/ICISAT54145.2021.9678409","DOIUrl":null,"url":null,"abstract":"Deep Reinforcement Learning (DRL) is, without a doubt, one of the most promising and exciting research area in Artificial Intelligence (AI). Several approaches have been proposed and improved in a short time to solve different problems. To tackle exploration’s issues, we present in our work a new approach based on curiosity to generate intrinsic rewards. These latter are related to complex environments with sparse rewards, such as the notorious Montezuma’s Revenge and an even more recent and complicated environment named Obstacle Tower. This type of environments requires the agent to generalize its knowledge, learn high-level planning and low-level control. The results of our experimentations showed that a value-based algorithm (such as Rainbow) can be successfully used with a curiosity-based exploration approach (such as random network distillation). This combination effectively performs better in the Obstacle Tower environment.","PeriodicalId":112478,"journal":{"name":"2021 International Conference on Information Systems and Advanced Technologies (ICISAT)","volume":"142 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Rainbow-RND: a Value-based Algorithm Augmented with Intrinsic Curiosity\",\"authors\":\"Sarah Nait Bahloul, Younes Mahmoudi\",\"doi\":\"10.1109/ICISAT54145.2021.9678409\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Reinforcement Learning (DRL) is, without a doubt, one of the most promising and exciting research area in Artificial Intelligence (AI). Several approaches have been proposed and improved in a short time to solve different problems. To tackle exploration’s issues, we present in our work a new approach based on curiosity to generate intrinsic rewards. These latter are related to complex environments with sparse rewards, such as the notorious Montezuma’s Revenge and an even more recent and complicated environment named Obstacle Tower. This type of environments requires the agent to generalize its knowledge, learn high-level planning and low-level control. The results of our experimentations showed that a value-based algorithm (such as Rainbow) can be successfully used with a curiosity-based exploration approach (such as random network distillation). 
This combination effectively performs better in the Obstacle Tower environment.\",\"PeriodicalId\":112478,\"journal\":{\"name\":\"2021 International Conference on Information Systems and Advanced Technologies (ICISAT)\",\"volume\":\"142 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Information Systems and Advanced Technologies (ICISAT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICISAT54145.2021.9678409\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Information Systems and Advanced Technologies (ICISAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICISAT54145.2021.9678409","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Deep Reinforcement Learning (DRL) is, without a doubt, one of the most promising and exciting research areas in Artificial Intelligence (AI). Several approaches have been proposed and refined in a short time to solve different problems. To tackle exploration issues, we present in our work a new curiosity-based approach for generating intrinsic rewards. Such rewards target complex environments with sparse rewards, such as the notoriously difficult Montezuma's Revenge and an even more recent and challenging environment named Obstacle Tower. This type of environment requires the agent to generalize its knowledge and to learn both high-level planning and low-level control. The results of our experiments show that a value-based algorithm (such as Rainbow) can be successfully combined with a curiosity-based exploration approach (such as random network distillation). This combination performs markedly better in the Obstacle Tower environment.
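For readers unfamiliar with random network distillation (RND), the sketch below illustrates the core mechanism the paper builds on: a fixed, randomly initialized target network embeds each observation, a predictor network is trained to match that embedding, and the prediction error, which is large on novel states and small on familiar ones, is used as a curiosity bonus added to the environment reward before the value-based (here, Rainbow-style) update. This is a minimal illustration assuming PyTorch; the network shapes, the `RND` class, and the weighting coefficient `beta` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class RND(nn.Module):
    """Minimal random network distillation module (illustrative sizes)."""

    def __init__(self, obs_dim: int, embed_dim: int = 128):
        super().__init__()
        # Fixed, randomly initialized target network: never trained.
        self.target = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor network: trained to match the target's embedding.
        self.predictor = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Per-state prediction error: high for novel states, low for
        # frequently visited ones, so it doubles as a curiosity signal.
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        return (pred_feat - target_feat).pow(2).mean(dim=-1)


# Usage sketch (obs_dim, batch size, and beta are illustrative):
rnd = RND(obs_dim=64)
obs = torch.randn(32, 64)          # a batch of observations
error = rnd(obs)                   # shape (32,): per-state novelty
intrinsic = error.detach()         # curiosity bonus used as reward
loss = error.mean()                # same error trains the predictor
loss.backward()
# The bonus is then mixed into the reward fed to the value-based learner:
#   r_total = r_extrinsic + beta * intrinsic
```

One design point worth noting: the same prediction error serves both as the intrinsic reward (detached, so reward computation does not backpropagate) and as the predictor's training loss, which is why the bonus naturally decays for states the agent revisits often.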