{"title":"离线强化学习的快速回归率","authors":"Yichun Hu, Nathan Kallus, Masatoshi Uehara","doi":"10.1287/moor.2021.0167","DOIUrl":null,"url":null,"abstract":"We study the regret of offline reinforcement learning in an infinite-horizon discounted Markov decision process (MDP). While existing analyses of common approaches, such as fitted Q-iteration (FQI), suggest root-n convergence for regret, empirical behavior exhibits much faster convergence. In this paper, we present a finer regret analysis that exactly characterizes this phenomenon by providing fast rates for the regret convergence. First, we show that given any estimate for the optimal quality function, the regret of the policy it defines converges at a rate given by the exponentiation of the estimate’s pointwise convergence rate, thus speeding up the rate. The level of exponentiation depends on the level of noise in the decision-making problem, rather than the estimation problem. We establish such noise levels for linear and tabular MDPs as examples. Second, we provide new analyses of FQI and Bellman residual minimization to establish the correct pointwise convergence guarantees. As specific cases, our results imply one-over-n rates in linear cases and exponential-in-n rates in tabular cases. We extend our findings to general function approximation by extending our results to regret guarantees based on L<jats:sub>p</jats:sub>-convergence rates for estimating the optimal quality function rather than pointwise rates, where L<jats:sub>2</jats:sub> guarantees for nonparametric estimation can be ensured under mild conditions.Funding: This work was supported by the Division of Information and Intelligent Systems, National Science Foundation [Grant 1846210].","PeriodicalId":49852,"journal":{"name":"Mathematics of Operations Research","volume":"47 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast Rates for the Regret of Offline Reinforcement Learning\",\"authors\":\"Yichun Hu, Nathan Kallus, Masatoshi Uehara\",\"doi\":\"10.1287/moor.2021.0167\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We study the regret of offline reinforcement learning in an infinite-horizon discounted Markov decision process (MDP). While existing analyses of common approaches, such as fitted Q-iteration (FQI), suggest root-n convergence for regret, empirical behavior exhibits much faster convergence. In this paper, we present a finer regret analysis that exactly characterizes this phenomenon by providing fast rates for the regret convergence. First, we show that given any estimate for the optimal quality function, the regret of the policy it defines converges at a rate given by the exponentiation of the estimate’s pointwise convergence rate, thus speeding up the rate. The level of exponentiation depends on the level of noise in the decision-making problem, rather than the estimation problem. We establish such noise levels for linear and tabular MDPs as examples. Second, we provide new analyses of FQI and Bellman residual minimization to establish the correct pointwise convergence guarantees. As specific cases, our results imply one-over-n rates in linear cases and exponential-in-n rates in tabular cases. 
We extend our findings to general function approximation by extending our results to regret guarantees based on L<jats:sub>p</jats:sub>-convergence rates for estimating the optimal quality function rather than pointwise rates, where L<jats:sub>2</jats:sub> guarantees for nonparametric estimation can be ensured under mild conditions.Funding: This work was supported by the Division of Information and Intelligent Systems, National Science Foundation [Grant 1846210].\",\"PeriodicalId\":49852,\"journal\":{\"name\":\"Mathematics of Operations Research\",\"volume\":\"47 1\",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-03-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mathematics of Operations Research\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1287/moor.2021.0167\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematics of Operations Research","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1287/moor.2021.0167","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Fast Rates for the Regret of Offline Reinforcement Learning
Abstract: We study the regret of offline reinforcement learning in an infinite-horizon discounted Markov decision process (MDP). While existing analyses of common approaches, such as fitted Q-iteration (FQI), suggest root-n convergence for regret, empirical behavior exhibits much faster convergence. In this paper, we present a finer regret analysis that exactly characterizes this phenomenon by providing fast rates for regret convergence. First, we show that given any estimate for the optimal quality function, the regret of the policy it defines converges at a rate given by the exponentiation of the estimate's pointwise convergence rate, thus speeding up the rate. The level of exponentiation depends on the level of noise in the decision-making problem, rather than the estimation problem. We establish such noise levels for linear and tabular MDPs as examples. Second, we provide new analyses of FQI and Bellman residual minimization to establish the correct pointwise convergence guarantees. As specific cases, our results imply one-over-n rates in linear cases and exponential-in-n rates in tabular cases. Finally, we extend our findings to general function approximation via regret guarantees based on Lp-convergence rates for estimating the optimal quality function rather than pointwise rates, where L2 guarantees for nonparametric estimation can be ensured under mild conditions.

Funding: This work was supported by the Division of Information and Intelligent Systems, National Science Foundation [Grant 1846210].
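As a reading aid (not part of the article), the following LaTeX sketch renders the type of relationship the abstract describes: a margin-style noise condition, with an illustrative exponent alpha, converts a pointwise (sup-norm) convergence rate for an estimate of the optimal quality function Q* into an exponentiated, faster regret rate for the greedy policy the estimate defines. The notation (Gap, alpha, zeta, C, pi_{\hat Q}) and the exact form of the condition are our assumptions for illustration, not the paper's precise statements.

% Illustrative margin-type noise condition with assumed exponent \alpha:
% Gap(s) is the value gap between the best and second-best actions at state s.
\[
  \mathrm{Gap}(s) = \max_{a} Q^*(s,a) - \max_{a \notin \arg\max_{a'} Q^*(s,a')} Q^*(s,a),
  \qquad
  \Pr\big( 0 < \mathrm{Gap}(s) \le t \big) \le C\, t^{\alpha}.
\]
% Schematic fast-rate phenomenon: a pointwise rate n^{-\zeta} for \hat{Q}
% is exponentiated into a regret rate n^{-\zeta(1+\alpha)} for the greedy policy.
\[
  \big\lVert \hat{Q} - Q^* \big\rVert_{\infty} = O_p\big(n^{-\zeta}\big)
  \;\Longrightarrow\;
  \mathrm{Regret}\big(\pi_{\hat{Q}}\big) = O_p\big(n^{-\zeta(1+\alpha)}\big),
  \qquad
  \pi_{\hat{Q}}(s) \in \arg\max_{a} \hat{Q}(s,a).
\]

Under this schematic reading, a root-n pointwise rate (zeta = 1/2) with alpha = 1 matches the one-over-n regret the abstract mentions for linear cases, and a uniformly positive action gap (alpha arbitrarily large, formally) is consistent with the exponential-in-n tabular rates.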
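For readers unfamiliar with fitted Q-iteration (FQI), whose analysis the abstract mentions, here is a minimal, generic sketch of tabular FQI on an offline dataset of transitions. It only illustrates the standard algorithm under assumed names (fitted_q_iteration, transitions, and so on are hypothetical); it is not the paper's estimator or experimental code. In the tabular case the Bellman-target regression reduces to per-(state, action) averaging, so the loop below is equivalent to value iteration on the empirical MDP.

import numpy as np


def fitted_q_iteration(transitions, n_states, n_actions, gamma=0.9, n_iters=200):
    """Tabular fitted Q-iteration on offline (s, a, r, s_next) transitions."""
    # Sufficient statistics per (state, action) pair.
    counts = np.zeros((n_states, n_actions))
    reward_sum = np.zeros((n_states, n_actions))
    next_counts = np.zeros((n_states, n_actions, n_states))
    for s, a, r, s_next in transitions:
        counts[s, a] += 1
        reward_sum[s, a] += r
        next_counts[s, a, s_next] += 1

    # Empirical mean rewards and transition probabilities; unvisited pairs default to 0.
    denom = np.maximum(counts, 1)
    r_hat = reward_sum / denom
    p_hat = next_counts / denom[:, :, None]

    q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        # One Bellman backup: Q(s,a) <- r_hat(s,a) + gamma * sum_s' p_hat(s'|s,a) * max_a' Q(s',a').
        q = r_hat + gamma * (p_hat @ q.max(axis=1))

    greedy_policy = q.argmax(axis=1)  # the policy defined by the final Q estimate
    return q, greedy_policy


# Toy usage on randomly generated offline data (illustrative only).
rng = np.random.default_rng(0)
data = [
    (int(rng.integers(5)), int(rng.integers(2)), float(rng.normal()), int(rng.integers(5)))
    for _ in range(1000)
]
q_hat, pi_hat = fitted_q_iteration(data, n_states=5, n_actions=2)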
About the journal:
Mathematics of Operations Research is an international journal of the Institute for Operations Research and the Management Sciences (INFORMS). The journal invites articles concerned with the mathematical and computational foundations in the areas of continuous, discrete, and stochastic optimization; mathematical programming; dynamic programming; stochastic processes; stochastic models; simulation methodology; control and adaptation; networks; game theory; and decision theory. Also sought are contributions to learning theory and machine learning that have special relevance to decision making, operations research, and management science. The emphasis is on originality, quality, and importance; correctness alone is not sufficient. Significant developments in operations research and management science not having substantial mathematical interest should be directed to other journals such as Management Science or Operations Research.