{"title":"The RBMLE method for Reinforcement Learning","authors":"A. Mete, Rahul Singh, P. Kumar","doi":"10.1109/CISS53076.2022.9751189","DOIUrl":null,"url":null,"abstract":"The Reward Biased Maximum Likelihood Estimate (RBMLE) method was proposed about four decades ago for the adaptive control of unknown Markov Decision Processes, and later studied for more general Controlled Markovian Systems and Linear Quadratic Gaussian systems. It showed that if one could bias the Maximum Likelihood Estimate in favor of parameters with larger rewards then one could obtain long-term average optimality. It provided a reason for preferring parameters with larger rewards based on the fact that generally one can only identify the behavior of a system under closed-loop, and therefore any limiting parameter estimate has to necessarily have lower reward than the true parameter. It thereby provided a reason for what his now called “optimism in the face of uncertainty”. It similarly preceded the definition of “regret”, and it is only in the last three years that it has been analyzed for its regret performance, both analytically, and in comparative simulation testing. This paper provides an account of the RBMLE method for reinforcement learning.","PeriodicalId":305918,"journal":{"name":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS53076.2022.9751189","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
The Reward Biased Maximum Likelihood Estimate (RBMLE) method was proposed about four decades ago for the adaptive control of unknown Markov Decision Processes, and was later studied for more general Controlled Markovian Systems and Linear Quadratic Gaussian systems. It showed that if one biases the Maximum Likelihood Estimate in favor of parameters with larger rewards, then one can obtain long-term average optimality. It provided a reason for preferring parameters with larger rewards: generally one can only identify the behavior of a system in closed loop, and therefore any limiting parameter estimate necessarily has a reward no larger than that of the true parameter. It thereby provided a rationale for what is now called “optimism in the face of uncertainty”. It similarly preceded the definition of “regret”, and it is only in the last three years that it has been analyzed for its regret performance, both analytically and in comparative simulation testing. This paper provides an account of the RBMLE method for reinforcement learning.
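To make the biasing idea concrete, here is a minimal sketch of an RBMLE-style rule for a Bernoulli multi-armed bandit. This is an illustration under stated assumptions, not the paper's exact algorithm: the grid search over the parameter space, the bias schedule alpha(t) = log(t + 1), and the function names are all introduced for this example.

```python
# Minimal sketch of reward-biased maximum likelihood estimation (RBMLE)
# for a Bernoulli bandit. Assumptions (not from the paper): a grid search
# over candidate means and a slowly growing bias alpha(t) = log(t + 1).
import numpy as np

def rbmle_index(successes, failures, alpha):
    """Reward-biased MLE index for one Bernoulli arm.

    Maximizes log-likelihood + alpha * mean_reward over a grid of
    candidate parameters, i.e., the MLE biased toward larger rewards.
    With alpha = 0 this reduces to the plain maximum likelihood estimate.
    """
    p = np.linspace(0.001, 0.999, 999)         # candidate mean rewards
    log_lik = successes * np.log(p) + failures * np.log(1.0 - p)
    return p[np.argmax(log_lik + alpha * p)]   # biased estimate

def rbmle_bandit(true_means, horizon, rng=np.random.default_rng(0)):
    """Run RBMLE-style arm selection for `horizon` rounds."""
    k = len(true_means)
    s = np.zeros(k)                            # success counts per arm
    f = np.zeros(k)                            # failure counts per arm
    for t in range(1, horizon + 1):
        alpha = np.log(t + 1)                  # growing bias toward optimism
        # Unplayed arms get the maximal index 1.0, forcing initial exploration.
        idx = [rbmle_index(s[i], f[i], alpha) if s[i] + f[i] > 0 else 1.0
               for i in range(k)]
        a = int(np.argmax(idx))                # play the largest-index arm
        r = rng.random() < true_means[a]       # Bernoulli reward
        s[a] += r
        f[a] += 1 - r
    return s, f

# Example usage: s, f = rbmle_bandit([0.3, 0.7], 5000); the play counts
# s + f should concentrate on the better arm as the horizon grows.
```

The growing bias term counteracts the closed-loop identification issue described above: an arm whose estimate undervalues it still receives an index boost proportional to alpha, so it continues to be sampled until its estimate is corrected.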