Nonlinear Monte Carlo Methods with Polynomial Runtime for Bellman Equations of Discrete Time High-Dimensional Stochastic Optimal Control Problems

Christian Beck, Arnulf Jentzen, Konrad Kleinberg, Thomas Kruse

Applied Mathematics and Optimization, volume 91, issue 1. Published 2025-02-04. DOI: 10.1007/s00245-024-10213-7. PDF: https://link.springer.com/content/pdf/10.1007/s00245-024-10213-7.pdf
Abstract
Discrete time stochastic optimal control problems and Markov decision processes (MDPs) serve as fundamental models for problems that involve sequential decision making under uncertainty and as such constitute the theoretical foundation of reinforcement learning. In this article we study the numerical approximation of MDPs with infinite time horizon, finite control set, and general state spaces. Our set-up in particular covers infinite-horizon optimal stopping problems of discrete time Markov processes. A key tool for solving MDPs is the Bellman equation, which characterizes the value function of the MDP and determines the optimal control strategies. By combining ideas from the full-history recursive multilevel Picard approximation method, which was recently introduced to solve certain nonlinear partial differential equations, with ideas from Q-learning, we introduce a class of suitable nonlinear Monte Carlo methods and prove that the proposed methods do not suffer from the curse of dimensionality in the numerical approximation of the solutions of Bellman equations and the associated discrete time stochastic optimal control problems.
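To make the setting concrete: for a discounted MDP with finite control set A, next state X^{x,a} reached from state x under action a, one-step reward r, and discount factor γ ∈ (0,1), the Bellman equation for the Q-function takes the standard form

\[
Q(x,a) = r(x,a) + \gamma\,\mathbb{E}\!\left[\max_{a' \in A} Q\big(X^{x,a}, a'\big)\right],
\qquad V(x) = \max_{a \in A} Q(x,a).
\]

(The paper's precise assumptions and notation may differ.) The sketch below illustrates the general flavor of a full-history recursive multilevel Picard (MLP) approximation of this fixed-point equation, not the authors' exact algorithm: the Picard iterates are telescoped, and the level-l increment is averaged over M^(n-l) independent transition samples, so cheap coarse levels receive many samples while expensive fine levels receive few. The dynamics `step`, the reward `reward`, and all parameter values are hypothetical toy choices.

```python
import random

GAMMA = 0.9        # discount factor (toy choice)
ACTIONS = (0, 1)   # finite control set (toy choice)

def step(x, a):
    """Toy controlled Markov transition: action-dependent drift plus noise."""
    return x + (a - 0.5) + random.gauss(0.0, 1.0)

def reward(x, a):
    """Toy one-step reward."""
    return -abs(x) - 0.1 * a

def q_mlp(x, a, n, M):
    """MLP estimate of Q(x, a) at Picard level n with Monte Carlo basis M.

    Starting from the zeroth iterate Q_0 = 0, the n-th Picard iterate is
    written as a telescoping sum of increments; the level-l increment is
    estimated with M**(n - l) independent transition samples.
    """
    if n == 0:
        return 0.0  # zeroth Picard iterate
    estimate = reward(x, a)
    for l in range(1, n):  # the l = 0 increment vanishes because Q_0 = 0
        m = M ** (n - l)
        acc = 0.0
        for _ in range(m):
            y = step(x, a)  # fresh, independent sample of the next state
            acc += max(q_mlp(y, b, l, M) for b in ACTIONS)
            if l > 1:       # the max over Q_0 = 0 vanishes as well
                acc -= max(q_mlp(y, b, l - 1, M) for b in ACTIONS)
        estimate += GAMMA * acc / m
    return estimate

if __name__ == "__main__":
    random.seed(0)
    est = q_mlp(0.0, 0, n=3, M=4)
    print(f"MLP estimate of Q(0.0, action 0): {est:.3f}")
```

The point reflected in the sampling schedule M**(n - l) is that the total number of transition samples grows only polynomially in the reciprocal accuracy, which is how MLP-type schemes avoid the curse of dimensionality that plagues grid-based approximations of Bellman equations.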
Journal Overview
The Applied Mathematics and Optimization Journal covers a broad range of mathematical methods, in particular those that connect with optimization and have a bearing on applications. Core topics include calculus of variations, partial differential equations, stochastic control, optimization of deterministic or stochastic systems in discrete or continuous time, homogenization, control theory, mean field games, dynamic games and optimal transport. Algorithmic, data analytic, machine learning and numerical methods which support the modeling and analysis of optimization problems are encouraged. Of particular interest are papers that present a novel idea in theory or modeling together with a connection to potential applications in science and engineering.