On the Modeling of Impulse Control with Random Effects for Continuous Markov Processes
Kurt L. Helmes, Richard H. Stockbridge, Chao Zhu
DOI: 10.1137/19m1286967
SIAM Journal on Control and Optimization, Volume 62, Issue 1, Pages 699-723. Published 2024-02-15. Journal Impact Factor 2.2, JCR Q2 (Automation & Control Systems).
Abstract. The use of coordinate processes for the modeling of impulse control for general Markov processes typically involves the construction of a probability measure on a countable product of copies of the path space. In addition, admissibility of an impulse control policy requires that the random times of the interventions be stopping times with respect to different filtrations arising from the different component coordinate processes. When the underlying Markov process has continuous paths, however, a simpler model can be developed which takes the single path space as its probability space and uses the natural filtration with respect to which the intervention times must be stopping times. Moreover, this model construction allows for impulse control with random effects, whereby the decision maker selects a distribution of the new state. This paper gives the construction of the probability measure on the path space for an admissible intervention policy subject to a randomized impulse mechanism. In addition, a class of policies is defined for which the paths between interventions are independent, and a further subclass for which the cycles following the initial cycle are identically distributed. A benefit of this smaller subclass of policies is that it permits the use of classical renewal arguments to analyze long-term average control problems. Further, the paper defines a class of stationary impulse policies for which the family of models gives a Markov family. The decision to use an [math] ordering policy in inventory management provides an example of an impulse policy for which the process has independent and identically distributed cycles and the family of models forms a Markov family.
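The renewal argument mentioned in the abstract rests on the renewal-reward theorem: when the cycles between interventions are i.i.d., the long-run average cost reduces to a ratio of single-cycle expectations. A minimal sketch, with notation of our own choosing rather than the paper's: let $\tau_n$ denote the successive intervention times, $Y_n = \tau_n - \tau_{n-1}$ the cycle lengths, and $R_n$ the total cost accrued over cycle $n$.

```latex
% Renewal-reward theorem (assuming i.i.d. cycles $(Y_n, R_n)$
% with $\mathbb{E}[Y_1] < \infty$ and $\mathbb{E}[|R_1|] < \infty$):
\lim_{t \to \infty} \frac{1}{t} \sum_{n=1}^{N(t)} R_n
  \;=\; \frac{\mathbb{E}[R_1]}{\mathbb{E}[Y_1]}
  \quad \text{a.s.},
\qquad N(t) := \max\{\, n : \tau_n \le t \,\}.
```

For the subclass in which only the cycles after the initial one are identically distributed, the same limit holds via a delayed renewal process, since the first cycle does not affect the long-run average.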
Citations: 0
Journal description:
SIAM Journal on Control and Optimization (SICON) publishes original research articles on the mathematics and applications of control theory and certain parts of optimization theory. Papers considered for publication must be significant at both the mathematical level and the level of applications or potential applications. Papers containing mostly routine mathematics or those with no discernible connection to control and systems theory or optimization will not be considered for publication. From time to time, the journal will also publish authoritative surveys of important subject areas in control theory and optimization whose level of maturity permits a clear and unified exposition.
The broad areas mentioned above are intended to encompass a wide range of mathematical techniques and scientific, engineering, economic, and industrial applications. These include stochastic and deterministic methods in control, estimation, and identification of systems; modeling and realization of complex control systems; the numerical analysis and related computational methodology of control processes and allied issues; and the development of mathematical theories and techniques that give new insights into old problems or provide the basis for further progress in control theory and optimization. Within the field of optimization, the journal focuses on the parts that are relevant to dynamic and control systems. Contributions to numerical methodology are also welcome in accordance with these aims, especially as related to large-scale problems and decomposition as well as to fundamental questions of convergence and approximation.