{"title":"度量空间中马尔可夫控制过程的线性规划逼近","authors":"O. Hernández-Lerma, J. Lasserre","doi":"10.1109/CDC.1997.657116","DOIUrl":null,"url":null,"abstract":"We develop a general framework to analyze the convergence of linear-programming approximations for Markov control processes in metric spaces. The approximations are based on aggregation and relaxation of constraints, as well as inner approximations of the decision variables. In particular, conditions are given under which the control problem’s optimal value can be approximated by a sequence of finite-dimensional linear programs.","PeriodicalId":229215,"journal":{"name":"Acta Applicandae Mathematica","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1997-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"Linear Programming Approximations for Markov Control Processes in Metric Spaces\",\"authors\":\"O. Hernández-Lerma, J. Lasserre\",\"doi\":\"10.1109/CDC.1997.657116\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We develop a general framework to analyze the convergence of linear-programming approximations for Markov control processes in metric spaces. The approximations are based on aggregation and relaxation of constraints, as well as inner approximations of the decision variables. In particular, conditions are given under which the control problem’s optimal value can be approximated by a sequence of finite-dimensional linear programs.\",\"PeriodicalId\":229215,\"journal\":{\"name\":\"Acta Applicandae Mathematica\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1997-12-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Acta Applicandae Mathematica\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CDC.1997.657116\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Acta Applicandae Mathematica","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CDC.1997.657116","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Linear Programming Approximations for Markov Control Processes in Metric Spaces
We develop a general framework to analyze the convergence of linear-programming approximations for Markov control processes in metric spaces. The approximations are based on aggregation and relaxation of constraints, together with inner approximations of the decision variables. In particular, we give conditions under which the optimal value of the control problem can be approximated by a sequence of finite-dimensional linear programs.
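To illustrate the kind of finite-dimensional linear program the abstract refers to, here is a minimal sketch, not the authors' construction: the standard occupation-measure LP for a discounted-cost finite MDP, solved with SciPy. In the paper's setting the state and action spaces are metric spaces and the LP is infinite-dimensional; discretizing states and actions as below is only one crude stand-in for the aggregation/relaxation step, and all model data here are illustrative.

```python
# Hedged sketch: occupation-measure LP for a small discounted-cost MDP.
# Optimal LP value = optimal expected discounted cost from the initial
# distribution nu.  All sizes, costs, and kernels are made up.

import numpy as np
from scipy.optimize import linprog

n_states, n_actions = 3, 2        # hypothetical small model
alpha = 0.9                        # discount factor
rng = np.random.default_rng(0)

# Transition kernel P[a, x, y] = P(y | x, a); rows sum to 1.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

c = rng.random((n_states, n_actions))    # one-stage cost c(x, a)
nu = np.full(n_states, 1.0 / n_states)   # initial distribution

# Decision variable: occupation measure mu(x, a), flattened row-major.
# Constraint for each state y:
#   sum_a mu(y, a) - alpha * sum_{x, a} P(y | x, a) mu(x, a) = nu(y)
A_eq = np.zeros((n_states, n_states * n_actions))
for y in range(n_states):
    for x in range(n_states):
        for a in range(n_actions):
            idx = x * n_actions + a
            A_eq[y, idx] = (1.0 if x == y else 0.0) - alpha * P[a, x, y]

res = linprog(c=c.ravel(), A_eq=A_eq, b_eq=nu,
              bounds=[(0, None)] * (n_states * n_actions),
              method="highs")

print("approximate optimal discounted cost:", res.fun)
```

In the framework analyzed in the paper, a sequence of such finite LPs (obtained by aggregating constraints, relaxing equalities, and restricting the decision variables to finite-dimensional subsets) is shown, under suitable conditions, to have optimal values converging to that of the original control problem; the sketch above only shows what a single member of such a sequence might look like after discretization.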