An Approach to Optimize Replay Buffer in Value-Based Reinforcement Learning

Baicheng Chen, Tianhan Gao, Qingwei Mi
{"title":"An Approach to Optimize Replay Buffer in Value-Based Reinforcement Learning","authors":"Baicheng Chen, Tianhan Gao, Qingwei Mi","doi":"10.1109/SoSE59841.2023.10178657","DOIUrl":null,"url":null,"abstract":"Reinforcement Learning (RL) has seen numerous advancements in recent years, particularly in the area of value-based algorithms. A key component of these algorithms is the Replay Buffer, which stores past experiences to improve learning. In this paper, the authors explore an optimization method for the Replay Buffer that increases the learning efficiency of an agent by prioritizing experiences based on their training value (T). The authors test the proposed approach in two environments, a maze and Cartpole-v1, comparing it to traditional Q-learning and Deep Q-Networks (DQN) algorithms. The results demonstrate improvements in learning efficiency and training effects, showing potential for the application of the method in various RL scenarios.","PeriodicalId":181642,"journal":{"name":"2023 18th Annual System of Systems Engineering Conference (SoSe)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 18th Annual System of Systems Engineering Conference (SoSe)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SoSE59841.2023.10178657","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Reinforcement Learning (RL) has seen numerous advancements in recent years, particularly in the area of value-based algorithms. A key component of these algorithms is the Replay Buffer, which stores past experiences to improve learning. In this paper, the authors explore an optimization method for the Replay Buffer that increases the learning efficiency of an agent by prioritizing experiences based on their training value (T). The authors test the proposed approach in two environments, a maze and CartPole-v1, comparing it to traditional Q-learning and Deep Q-Network (DQN) algorithms. The results demonstrate improvements in learning efficiency and training performance, showing potential for applying the method in various RL scenarios.
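
A minimal sketch of the idea described in the abstract is shown below: a replay buffer that samples transitions in proportion to a per-experience training-value score. The scoring used here (and all class and method names) is an illustrative assumption standing in for the paper's T, not the authors' actual implementation, which the abstract does not specify.

```python
# Illustrative sketch only: a replay buffer that samples transitions with
# probability proportional to a per-experience "training value" score.
# The score definition is an assumption; the paper's T may differ.
import random
from collections import namedtuple

Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])


class ValueWeightedReplayBuffer:
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []   # stored transitions
        self.scores = []   # training-value score per transition

    def push(self, transition, score=1.0):
        """Store a transition with an initial training-value score."""
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.scores.pop(0)
        self.buffer.append(transition)
        self.scores.append(max(score, 1e-6))  # keep every item sampleable

    def sample(self, batch_size):
        """Sample a batch with probability proportional to each score."""
        idx = random.choices(range(len(self.buffer)), weights=self.scores, k=batch_size)
        return idx, [self.buffer[i] for i in idx]

    def update_scores(self, idx, new_scores):
        """Refresh scores after the learner recomputes them (e.g. new TD errors)."""
        for i, s in zip(idx, new_scores):
            self.scores[i] = max(s, 1e-6)
```

In a DQN-style training loop, the agent would push each transition with an initial score and call update_scores with recomputed values after each gradient step; how the paper actually defines and updates T is left to the full text.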