Advances and challenges in learning from experience replay

IF 10.7 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Daniel Eugênio Neves, Lucila Ishitani, Zenilton Kleber Gonçalves do Patrocínio Júnior
DOI: 10.1007/s10462-024-11062-0
Journal: Artificial Intelligence Review, 58(2)
Publication date: 2024-12-20 (Journal Article)
Full text (PDF): https://link.springer.com/content/pdf/10.1007/s10462-024-11062-0.pdf
Article page: https://link.springer.com/article/10.1007/s10462-024-11062-0
Citations: 0

Abstract

From the first theoretical propositions in the 1950s to its application to real-world problems, Reinforcement Learning (RL) remains a fascinating and complex class of machine learning algorithms, with a fast-growing literature in recent years. In this work, we present an extensive and structured literature review and discuss how the Experience Replay (ER) technique has been fundamental in making various RL methods more data-efficient across the most relevant problems and domains. ER is the central focus of this review. One of its main contributions is a taxonomy that organizes the many research works and the different RL methods that use ER. The focus here is on how RL methods improve and apply ER strategies, demonstrating their specificities and contributions while keeping ER as a prominent component. Another relevant contribution is the facet-oriented organization, which allows different reading perspectives: based on the fundamental problems of RL, focused on algorithmic strategies and architectural decisions, or oriented toward the different applications of RL with ER. Moreover, we begin by presenting a detailed formal theoretical foundation of RL and some of its most relevant algorithms, and draw from the recent literature the main trends, challenges, and advances concerning the formal basis of ER and how to improve it so that it becomes even more efficient across methods and domains. Lastly, we discuss challenges and open problems and outline relevant paths for future work.
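To make the reviewed technique concrete: in its basic form, Experience Replay stores an agent's transitions in a fixed-capacity buffer and trains on uniformly sampled minibatches, which reuses data and breaks the temporal correlation of consecutive transitions. The following is a minimal sketch of such a uniform replay buffer; the class and parameter names are illustrative, not taken from the paper.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity, seed=None):
        # deque with maxlen evicts the oldest transition once capacity is reached
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        """Record one environment transition."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Draw a uniform random minibatch, decorrelating consecutive experiences."""
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

A typical use is to `push` each transition during interaction and, once the buffer holds enough data, `sample` minibatches for gradient updates; prioritized variants discussed in the survey replace the uniform `sample` with importance-weighted draws.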

Source journal: Artificial Intelligence Review (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles per year: 194
Review time: 5.3 months
Journal introduction: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.