Diogo S. Carvalho, Pedro A. Santos, Francisco S. Melo
{"title":"强化学习收敛的正则化与双时间尺度","authors":"Diogo S. Carvalho, Pedro A. Santos, Francisco S. Melo","doi":"10.1007/s00245-025-10304-z","DOIUrl":null,"url":null,"abstract":"<div><p>Reinforcement learning algorithms aim at solving discrete time stochastic control problems with unknown underlying dynamical systems by an iterative process of interaction. The process is formalized as a Markov decision process, where at each time step, a control action is given, the system provides a reward, and the state changes stochastically. The objective of the controller is the expected sum of rewards obtained throughout the interaction. When the set of states and or actions is large, it is necessary to use some form of function approximation. But even if the function approximation set is simply a linear span of fixed features, the reinforcement learning algorithms may diverge. In this work, we propose and analyze regularized two-time-scale variations of the algorithms, and prove that they are guaranteed to converge almost-surely to a unique solution to the reinforcement learning problem.</p></div>","PeriodicalId":55566,"journal":{"name":"Applied Mathematics and Optimization","volume":"92 2","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2025-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00245-025-10304-z.pdf","citationCount":"0","resultStr":"{\"title\":\"Regularization and Two Time Scales for Convergence of Reinforcement Learning\",\"authors\":\"Diogo S. Carvalho, Pedro A. Santos, Francisco S. Melo\",\"doi\":\"10.1007/s00245-025-10304-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Reinforcement learning algorithms aim at solving discrete time stochastic control problems with unknown underlying dynamical systems by an iterative process of interaction. The process is formalized as a Markov decision process, where at each time step, a control action is given, the system provides a reward, and the state changes stochastically. The objective of the controller is the expected sum of rewards obtained throughout the interaction. When the set of states and or actions is large, it is necessary to use some form of function approximation. But even if the function approximation set is simply a linear span of fixed features, the reinforcement learning algorithms may diverge. 
In this work, we propose and analyze regularized two-time-scale variations of the algorithms, and prove that they are guaranteed to converge almost-surely to a unique solution to the reinforcement learning problem.</p></div>\",\"PeriodicalId\":55566,\"journal\":{\"name\":\"Applied Mathematics and Optimization\",\"volume\":\"92 2\",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-08-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s00245-025-10304-z.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Mathematics and Optimization\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s00245-025-10304-z\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Mathematics and Optimization","FirstCategoryId":"100","ListUrlMain":"https://link.springer.com/article/10.1007/s00245-025-10304-z","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Regularization and Two Time Scales for Convergence of Reinforcement Learning
Reinforcement learning algorithms aim to solve discrete-time stochastic control problems with unknown underlying dynamics through an iterative process of interaction. The process is formalized as a Markov decision process: at each time step, a control action is taken, the system provides a reward, and the state changes stochastically. The controller's objective is to maximize the expected sum of rewards obtained throughout the interaction. When the set of states and/or actions is large, some form of function approximation is necessary. But even when the function approximation set is simply the linear span of fixed features, reinforcement learning algorithms may diverge. In this work, we propose and analyze regularized two-time-scale variations of the algorithms, and prove that they are guaranteed to converge almost surely to a unique solution of the reinforcement learning problem.
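The abstract describes the setting rather than the algorithm itself. As a rough illustration only, the sketch below shows what a regularized two-time-scale Q-learning update with linear features can look like: a fast iterate is updated with an L2-regularized temporal-difference step, while a slow iterate tracks it at a smaller step size. All names, the step-size schedules, and the regularization coefficient `eta` are assumptions made for the example; this is not the authors' construction from the paper.

```python
import numpy as np

def regularized_two_timescale_q(features, env_step, n_actions,
                                gamma=0.99, eta=1e-2,
                                alpha=lambda t: 1.0 / (t + 1) ** 0.6,
                                beta=lambda t: 1.0 / (t + 1),
                                n_steps=100_000, seed=0):
    """Illustrative sketch: linear Q-learning with an L2 regularizer and two
    step-size sequences (a fast iterate w and a slow iterate theta).
    `features(s, a)` returns a fixed feature vector phi(s, a);
    `env_step(s, a)` returns (reward, next_state); both are placeholders."""
    rng = np.random.default_rng(seed)
    d = features(0, 0).shape[0]          # feature dimension
    theta = np.zeros(d)                  # slow time scale (main weights)
    w = np.zeros(d)                      # fast time scale (tracking iterate)
    s = 0                                # assume integer-indexed states
    for t in range(n_steps):
        a = int(rng.integers(n_actions)) # behavior policy: uniform exploration
        r, s_next = env_step(s, a)
        phi = features(s, a)
        # greedy bootstrap target computed with the slow weights theta
        q_next = max(features(s_next, b) @ theta for b in range(n_actions))
        td_error = r + gamma * q_next - phi @ w
        # fast update: regularized semi-gradient step on w
        w += alpha(t) * (td_error * phi - eta * w)
        # slow update: theta tracks w at a slower rate
        theta += beta(t) * (w - theta)
        s = s_next
    return theta
```

In this kind of scheme, the decreasing step sizes satisfy beta(t)/alpha(t) -> 0, so the slow iterate sees the fast iterate as nearly converged, and the regularization term keeps the fast update a contraction; the paper's contribution is a convergence proof for variations built on this principle.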
About the journal:
The Applied Mathematics and Optimization Journal covers a broad range of mathematical methods, in particular those that bridge with optimization and have some connection with applications. Core topics include the calculus of variations, partial differential equations, stochastic control, optimization of deterministic or stochastic systems in discrete or continuous time, homogenization, control theory, mean field games, dynamic games, and optimal transport. Algorithmic, data-analytic, machine learning, and numerical methods that support the modeling and analysis of optimization problems are encouraged. Of particular interest are papers that present a novel idea in either the theory or the model and include some connection with potential applications in science and engineering.