Fast Rates for the Regret of Offline Reinforcement Learning

Yichun Hu, Nathan Kallus, Masatoshi Uehara
{"title":"Fast Rates for the Regret of Offline Reinforcement Learning","authors":"Yichun Hu, Nathan Kallus, Masatoshi Uehara","doi":"10.1287/moor.2021.0167","DOIUrl":null,"url":null,"abstract":"We study the regret of offline reinforcement learning in an infinite-horizon discounted Markov decision process (MDP). While existing analyses of common approaches, such as fitted Q-iteration (FQI), suggest root-n convergence for regret, empirical behavior exhibits much faster convergence. In this paper, we present a finer regret analysis that exactly characterizes this phenomenon by providing fast rates for the regret convergence. First, we show that given any estimate for the optimal quality function, the regret of the policy it defines converges at a rate given by the exponentiation of the estimate’s pointwise convergence rate, thus speeding up the rate. The level of exponentiation depends on the level of noise in the decision-making problem, rather than the estimation problem. We establish such noise levels for linear and tabular MDPs as examples. Second, we provide new analyses of FQI and Bellman residual minimization to establish the correct pointwise convergence guarantees. As specific cases, our results imply one-over-n rates in linear cases and exponential-in-n rates in tabular cases. We extend our findings to general function approximation by extending our results to regret guarantees based on L<jats:sub>p</jats:sub>-convergence rates for estimating the optimal quality function rather than pointwise rates, where L<jats:sub>2</jats:sub> guarantees for nonparametric estimation can be ensured under mild conditions.Funding: This work was supported by the Division of Information and Intelligent Systems, National Science Foundation [Grant 1846210].","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1287/moor.2021.0167","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}

Abstract

We study the regret of offline reinforcement learning in an infinite-horizon discounted Markov decision process (MDP). While existing analyses of common approaches, such as fitted Q-iteration (FQI), suggest root-n convergence for regret, empirical behavior exhibits much faster convergence. In this paper, we present a finer regret analysis that exactly characterizes this phenomenon by providing fast rates for the regret convergence. First, we show that given any estimate for the optimal quality function, the regret of the policy it defines converges at a rate given by the exponentiation of the estimate's pointwise convergence rate, thus speeding up the rate. The level of exponentiation depends on the level of noise in the decision-making problem, rather than the estimation problem. We establish such noise levels for linear and tabular MDPs as examples. Second, we provide new analyses of FQI and Bellman residual minimization to establish the correct pointwise convergence guarantees. As specific cases, our results imply one-over-n rates in linear cases and exponential-in-n rates in tabular cases. We extend our findings to general function approximation by extending our results to regret guarantees based on L_p-convergence rates for estimating the optimal quality function rather than pointwise rates, where L_2 guarantees for nonparametric estimation can be ensured under mild conditions.

Funding: This work was supported by the Division of Information and Intelligent Systems, National Science Foundation [Grant 1846210].
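As a concrete illustration of the pipeline the abstract analyzes (estimate the optimal quality function from an offline dataset, then deploy the greedy policy it defines), below is a minimal sketch of tabular fitted Q-iteration. The dataset format, MDP sizes, and iteration counts are illustrative assumptions, not the paper's setup or experiments.

```python
# Minimal sketch of fitted Q-iteration (FQI) on a small tabular MDP:
# estimate Q* from offline (s, a, r, s') transitions, then act greedily.
# All sizes and hyperparameters below are illustrative assumptions.

import numpy as np


def fitted_q_iteration(transitions, n_states, n_actions, gamma=0.9, n_iters=200):
    """Tabular FQI: repeatedly regress Bellman backups onto (s, a) cells.

    transitions: list of (s, a, r, s_next) tuples sampled offline.
    Returns an (n_states, n_actions) estimate of the optimal Q-function.
    """
    q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        # Accumulate Bellman targets r + gamma * max_a' Q(s', a') per (s, a) cell.
        target_sum = np.zeros_like(q)
        count = np.zeros_like(q)
        for s, a, r, s_next in transitions:
            target_sum[s, a] += r + gamma * q[s_next].max()
            count[s, a] += 1.0
        # In the tabular case the "regression" is a per-cell average;
        # cells never visited in the data keep their previous value.
        visited = count > 0
        q[visited] = target_sum[visited] / count[visited]
    return q


def greedy_policy(q):
    """The policy defined by an estimated Q-function: argmax over actions."""
    return q.argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2
    # Synthetic offline data from a uniform behavior policy on a random MDP
    # (purely illustrative; any batch of (s, a, r, s') transitions works).
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.uniform(size=(n_states, n_actions))
    transitions, s = [], 0
    for _ in range(10_000):
        a = rng.integers(n_actions)
        s_next = rng.choice(n_states, p=P[s, a])
        transitions.append((s, a, R[s, a], s_next))
        s = s_next
    q_hat = fitted_q_iteration(transitions, n_states, n_actions)
    print("Greedy policy from the FQI estimate:", greedy_policy(q_hat))
```

The abstract's first result concerns exactly this last step: how the suboptimality of the greedy policy extracted from an estimate like q_hat shrinks faster than the estimation error itself, with the speed-up governed by the noise level of the decision-making problem.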