{"title":"An approximate policy iteration viewpoint of actor–critic algorithms","authors":"Zaiwei Chen , Siva Theja Maguluri","doi":"10.1016/j.automatica.2025.112395","DOIUrl":null,"url":null,"abstract":"<div><div>In this work, we establish sample complexity guarantees for a broad class of policy-space algorithms for reinforcement learning. A policy-space algorithm comprises an actor for policy improvement and a critic for policy evaluation. For the actor, we analyze update rules such as softmax, <span><math><mi>ϵ</mi></math></span>-greedy, and the celebrated natural policy gradient (NPG). Unlike traditional gradient-based analyses, we view NPG as an approximate policy iteration method. This perspective allows us to leverage the Bellman operator’s properties to show that NPG (without regularization) achieves geometric convergence to a globally optimal policy with increasing stepsizes. For the critic, we study TD-learning with linear function approximation and off-policy sampling. To address the instability of TD-learning in this setting, we propose a stable framework using multi-step returns and generalized importance sampling factors, including two specific algorithms: <span><math><mi>λ</mi></math></span>-averaged <span><math><mi>Q</mi></math></span>-trace and two-sided <span><math><mi>Q</mi></math></span>-trace. We also provide a finite-sample analysis for the critic. Combining the geometric convergence of the actor with the finite-sample results of the critic, we establish for the first time an overall sample complexity of <span><math><mrow><mover><mrow><mi>O</mi></mrow><mrow><mo>̃</mo></mrow></mover><mrow><mo>(</mo><msup><mrow><mi>ϵ</mi></mrow><mrow><mo>−</mo><mn>2</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span> for finding an optimal policy (up to a function approximation error) using policy-space methods under off-policy sampling and linear function approximation.</div></div>","PeriodicalId":55413,"journal":{"name":"Automatica","volume":"179 ","pages":"Article 112395"},"PeriodicalIF":4.8000,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automatica","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0005109825002894","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Abstract
In this work, we establish sample complexity guarantees for a broad class of policy-space algorithms for reinforcement learning. A policy-space algorithm comprises an actor for policy improvement and a critic for policy evaluation. For the actor, we analyze update rules such as softmax, ϵ-greedy, and the celebrated natural policy gradient (NPG). Unlike traditional gradient-based analyses, we view NPG as an approximate policy iteration method. This perspective allows us to leverage the Bellman operator’s properties to show that NPG (without regularization) achieves geometric convergence to a globally optimal policy with increasing stepsizes. For the critic, we study TD-learning with linear function approximation and off-policy sampling. To address the instability of TD-learning in this setting, we propose a stable framework using multi-step returns and generalized importance sampling factors, including two specific algorithms: λ-averaged Q-trace and two-sided Q-trace. We also provide a finite-sample analysis for the critic. Combining the geometric convergence of the actor with the finite-sample results of the critic, we establish for the first time an overall sample complexity of Õ(ϵ^{-2}) for finding an optimal policy (up to a function approximation error) using policy-space methods under off-policy sampling and linear function approximation.
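The abstract describes an actor–critic template: an NPG/softmax actor read as approximate policy iteration with increasing stepsizes, paired with an off-policy TD critic that uses linear function approximation and truncated importance-sampling factors. The sketch below illustrates that template on a tiny synthetic MDP. It is only a hedged illustration, not the paper's λ-averaged or two-sided Q-trace algorithms; the feature dimension, stepsizes, truncation levels, and all variable names are assumptions made for this example.

```python
# Minimal actor-critic sketch (illustrative assumptions throughout): a softmax/NPG-style
# actor treated as approximate policy iteration, plus an off-policy linear-TD critic with
# two-sided clipping of the importance-sampling ratio (loosely in the spirit of Q-trace).
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic MDP: S states, A actions, random dynamics and rewards (assumed sizes).
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, :] = transition distribution
R = rng.uniform(size=(S, A))                 # rewards r(s, a)
Phi = rng.normal(size=(S * A, 4))            # linear features for (s, a) pairs

def feat(s, a):
    return Phi[s * A + a]

def softmax_policy(theta):
    # theta[s, a] are actor parameters; each row is a softmax distribution over actions.
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def critic_td(pi, behavior, w, steps=2000, alpha=0.05, rho_min=0.5, rho_max=2.0):
    """Off-policy linear TD(0) with a two-sided truncated importance ratio (illustrative)."""
    s = rng.integers(S)
    for _ in range(steps):
        a = rng.choice(A, p=behavior[s])
        s_next = rng.choice(S, p=P[s, a])
        a_next = rng.choice(A, p=behavior[s_next])
        # Generalized importance-sampling factor, clipped from both sides for stability.
        rho = float(np.clip(pi[s_next, a_next] / behavior[s_next, a_next], rho_min, rho_max))
        td_err = R[s, a] + gamma * rho * feat(s_next, a_next) @ w - feat(s, a) @ w
        w = w + alpha * td_err * feat(s, a)
        s = s_next
    return w

# Actor-critic loop: the actor adds the critic's Q-estimates to its logits with a growing
# stepsize (the tabular-softmax form of an NPG step), mimicking approximate policy
# iteration; the behavior policy stays fixed, so sampling is off-policy.
theta = np.zeros((S, A))
behavior = np.full((S, A), 1.0 / A)
w = np.zeros(Phi.shape[1])
for t in range(20):
    pi = softmax_policy(theta)
    w = critic_td(pi, behavior, w)
    Q_hat = (Phi @ w).reshape(S, A)   # critic's Q(s, a) estimates
    eta = 1.0 * (t + 1)               # increasing stepsize
    theta = theta + eta * Q_hat       # softmax/NPG-style actor update

print("greedy policy:", softmax_policy(theta).argmax(axis=1))
```

As the stepsize grows, the softmax actor update approaches a greedy policy-improvement step, which mirrors the approximate-policy-iteration reading of NPG in the abstract; the two-sided clipping of the importance ratio loosely echoes the two-sided Q-trace idea for stabilizing off-policy TD.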
About the journal
Automatica is a leading archival publication in the field of systems and control. Today the field encompasses a broad set of areas and topics, and it is thriving not only within itself but also through its impact on other fields, such as communications, computers, biology, energy, and economics. Since its inception in 1963, Automatica has kept abreast of the evolution of the field and has emerged as a leading publication driving its trends.
Founded in 1963, Automatica became a journal of the International Federation of Automatic Control (IFAC) in 1969. It offers a characteristic blend of theoretical and applied papers of archival, lasting value, reporting cutting-edge research results by authors across the globe. Articles appear in distinct categories, including regular, brief, and survey papers, technical communiqués, correspondence items, and reviews of published books of interest to the readership. The journal occasionally publishes special issues on emerging new topics or on established mature topics of interest to a broad audience.
Automatica solicits original, high-quality contributions in all of the categories listed above and in all areas of systems and control, interpreted in a broad and constantly evolving sense. Papers may be submitted directly to a subject editor, or to the Editor-in-Chief if the author is unsure of the subject area. Editorial procedures in place ensure careful, fair, and prompt handling of all submitted articles. Accepted papers appear in the journal in the shortest time feasible given production constraints.