{"title":"Rational inattention in controlled Markov processes","authors":"Ehsan Shafieepoorfard, M. Raginsky, Sean P. Meyn","doi":"10.1109/ACC.2013.6580906","DOIUrl":null,"url":null,"abstract":"The paper poses a general model for optimal control subject to information constraints, motivated in part by recent work on information-constrained decision-making by economic agents. In the average-cost optimal control framework, the general model introduced in this paper reduces to a variant of the linear-programming representation of the average-cost optimal control problem, subject to an additional mutual information constraint on the randomized stationary policy. The resulting optimization problem is convex and admits a decomposition based on the Bellman error, which is the object of study in approximate dynamic programming. The structural results presented in this paper can be used to obtain performance bounds, as well as algorithms for computation or approximation of optimal policies.","PeriodicalId":145065,"journal":{"name":"2013 American Control Conference","volume":"213 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 American Control Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACC.2013.6580906","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
The paper proposes a general model for optimal control subject to information constraints, motivated in part by recent work on information-constrained decision-making by economic agents. In the average-cost optimal control framework, the general model reduces to a variant of the linear-programming representation of the average-cost optimal control problem, subject to an additional mutual-information constraint on the randomized stationary policy. The resulting optimization problem is convex and admits a decomposition based on the Bellman error, the central object of study in approximate dynamic programming. The structural results presented in the paper can be used to obtain performance bounds, as well as algorithms for computing or approximating optimal policies.
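To make the mutual-information constraint concrete, below is a minimal numerical sketch of the static, one-shot analogue of the problem: minimize E[c(X,U)] + (1/β)·I(X;U) over randomized policies p(u|x), with the source distribution p(x) held fixed. The Blahut–Arimoto-style fixed-point iteration, the function name, and the trade-off parameter beta here are illustrative assumptions, not the paper's algorithm; the paper treats the dynamic average-cost case, where the state distribution is itself induced by the policy, via a convex linear-programming formulation.

```python
import numpy as np

def rational_inattention_policy(cost, p_x, beta, n_iters=500, tol=1e-10):
    """Blahut-Arimoto-style iteration for the one-shot analogue of the
    information-constrained decision problem:

        minimize  E[c(X, U)] + (1/beta) * I(X; U)

    over randomized policies p(u|x), with X ~ p_x held fixed.
    `beta` trades off expected cost against the information rate.
    (Illustrative sketch only: in the dynamic average-cost problem of the
    paper, p_x is the stationary distribution induced by the policy itself.)
    """
    n_x, n_u = cost.shape
    q_u = np.full(n_u, 1.0 / n_u)  # action marginal q(u), initialized uniform
    for _ in range(n_iters):
        # Channel update: p(u|x) proportional to q(u) * exp(-beta * c(x, u))
        log_p = np.log(q_u)[None, :] - beta * cost
        log_p -= log_p.max(axis=1, keepdims=True)  # for numerical stability
        p_u_given_x = np.exp(log_p)
        p_u_given_x /= p_u_given_x.sum(axis=1, keepdims=True)
        # Marginal update: q(u) = sum_x p(x) p(u|x)
        q_new = p_x @ p_u_given_x
        if np.max(np.abs(q_new - q_u)) < tol:
            q_u = q_new
            break
        q_u = q_new
    return p_u_given_x, q_u

# Hypothetical example: 3 states, 2 actions
p_x = np.array([0.5, 0.3, 0.2])
cost = np.array([[0.0, 1.0],
                 [1.0, 0.0],
                 [0.5, 0.5]])
policy, marginal = rational_inattention_policy(cost, p_x, beta=2.0)
```

In this sketch, large beta (cheap information) recovers the greedy cost-minimizing policy, while beta near zero forces the policy to ignore the state and play the cost-weighted action marginal, mirroring the two extremes of the information constraint discussed in the rational-inattention literature.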