Nick-Marios T. Kokolakis , Kyriakos G. Vamvoudakis , Wassim M. Haddad
{"title":"用于最优反馈控制的预定义时间强化学习","authors":"Nick-Marios T. Kokolakis , Kyriakos G. Vamvoudakis , Wassim M. Haddad","doi":"10.1016/j.automatica.2025.112421","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we develop an online predefined time-convergent reinforcement learning architecture to solve the optimal predefined-time stabilization problem. Specifically, we introduce the problem of optimal predefined-time stabilization to construct feedback controllers that guarantee closed-loop system predefined-time stability while optimizing a given performance measure. The predefined time stability of the closed-loop system is established via a Lyapunov function satisfying a differential inequality while simultaneously serving as a solution to the steady-state Hamilton–Jacobi–Bellman equation ensuring optimality. Given that the Hamilton–Jacobi–Bellman equation is generally difficult to solve, we develop a critic-only reinforcement learning-based algorithm to learn the solution to the steady-state Hamilton–Jacobi–Bellman equation in predefined time. In particular, a non-Lipschitz experience replay-based learning law utilizing recorded and current data is introduced for updating the critic weights to learn the value function. The non-Lipschitz property of the dynamics and letting the learning rate as a function of the predefined time gives rise to predefined-time convergence, while the experience replay-based approach eliminates the need to satisfy the persistence of excitation condition as long as the recorded data set is sufficiently rich. Finally, an illustrative numerical example is provided to demonstrate the efficacy of the proposed approach.</div></div>","PeriodicalId":55413,"journal":{"name":"Automatica","volume":"179 ","pages":"Article 112421"},"PeriodicalIF":4.8000,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Predefined-time reinforcement learning for optimal feedback control\",\"authors\":\"Nick-Marios T. Kokolakis , Kyriakos G. Vamvoudakis , Wassim M. Haddad\",\"doi\":\"10.1016/j.automatica.2025.112421\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In this paper, we develop an online predefined time-convergent reinforcement learning architecture to solve the optimal predefined-time stabilization problem. Specifically, we introduce the problem of optimal predefined-time stabilization to construct feedback controllers that guarantee closed-loop system predefined-time stability while optimizing a given performance measure. The predefined time stability of the closed-loop system is established via a Lyapunov function satisfying a differential inequality while simultaneously serving as a solution to the steady-state Hamilton–Jacobi–Bellman equation ensuring optimality. Given that the Hamilton–Jacobi–Bellman equation is generally difficult to solve, we develop a critic-only reinforcement learning-based algorithm to learn the solution to the steady-state Hamilton–Jacobi–Bellman equation in predefined time. In particular, a non-Lipschitz experience replay-based learning law utilizing recorded and current data is introduced for updating the critic weights to learn the value function. 
The non-Lipschitz property of the dynamics and letting the learning rate as a function of the predefined time gives rise to predefined-time convergence, while the experience replay-based approach eliminates the need to satisfy the persistence of excitation condition as long as the recorded data set is sufficiently rich. Finally, an illustrative numerical example is provided to demonstrate the efficacy of the proposed approach.</div></div>\",\"PeriodicalId\":55413,\"journal\":{\"name\":\"Automatica\",\"volume\":\"179 \",\"pages\":\"Article 112421\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2025-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Automatica\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0005109825003152\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automatica","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0005109825003152","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Predefined-time reinforcement learning for optimal feedback control
In this paper, we develop an online predefined-time convergent reinforcement learning architecture to solve the optimal predefined-time stabilization problem. Specifically, we introduce the problem of optimal predefined-time stabilization to construct feedback controllers that guarantee predefined-time stability of the closed-loop system while optimizing a given performance measure. Predefined-time stability of the closed-loop system is established via a Lyapunov function that satisfies a differential inequality while simultaneously serving as a solution to the steady-state Hamilton–Jacobi–Bellman equation, thereby ensuring optimality. Since the Hamilton–Jacobi–Bellman equation is generally difficult to solve, we develop a critic-only reinforcement learning-based algorithm that learns the solution to the steady-state Hamilton–Jacobi–Bellman equation in predefined time. In particular, a non-Lipschitz, experience replay-based learning law utilizing recorded and current data is introduced for updating the critic weights to learn the value function. The non-Lipschitz property of the learning dynamics, together with a learning rate defined as a function of the predefined time, gives rise to predefined-time convergence, while the experience replay-based approach eliminates the need to satisfy the persistence of excitation condition as long as the recorded data set is sufficiently rich. Finally, an illustrative numerical example is provided to demonstrate the efficacy of the proposed approach.
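The abstract does not state the learning law explicitly, but a minimal sketch can illustrate the ingredients it names: a critic-only weight update driven by current and recorded (experience-replay) data, a non-Lipschitz fractional-power error term, and a learning rate tied to a user-chosen predefined time. The Python snippet below is such a sketch under stated assumptions; the residual form, the function and variable names (sig_pow, bellman_residual, critic_update, T, p), and the Euler-step structure are illustrative choices, not the authors' exact formulation.

```python
# Illustrative sketch only: a critic-only, experience-replay weight update with
# a non-Lipschitz (fractional-power) error term and a learning rate chosen from
# a predefined time T. All names and the residual form are assumptions.
import numpy as np

def sig_pow(x, p):
    """Signed power |x|^p * sign(x); non-Lipschitz at the origin for 0 < p < 1."""
    return np.sign(x) * np.abs(x) ** p

def bellman_residual(W, phi, r):
    """Hypothetical Bellman-type residual for critic weights W, regressor phi,
    and a recorded stage-cost sample r (not the paper's exact form)."""
    return float(W @ phi + r)

def critic_update(W, current_sample, replay_buffer, T=5.0, p=0.5, dt=1e-3):
    """One Euler step of a critic-only update using the current sample and a
    buffer of recorded (regressor, stage-cost) pairs; the learning rate is
    derived from the user-specified predefined time T."""
    alpha = 1.0 / T
    grad = np.zeros_like(W)
    for phi, r in [current_sample] + replay_buffer:
        e = bellman_residual(W, phi, r)   # residual at this data point
        grad += phi * sig_pow(e, p)       # fractional-power (non-Lipschitz) term
    return W - alpha * grad * dt

# Toy usage with random regressors standing in for value-function features.
rng = np.random.default_rng(0)
W = rng.standard_normal(4)
current = (rng.standard_normal(4), 0.3)
buffer = [(rng.standard_normal(4), float(rng.standard_normal())) for _ in range(10)]
print(critic_update(W, current, buffer))
```

In this reading, the replay buffer plays the role of the recorded data set: provided it is sufficiently rich, summing the regressor-weighted residuals over the buffer supplies the excitation that a persistence-of-excitation condition would otherwise have to guarantee.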
About the journal:
Automatica is a leading archival publication in the field of systems and control. Today the field encompasses a broad set of areas and topics, and it is thriving not only within itself but also through its impact on other fields such as communications, computing, biology, energy, and economics. Since its inception in 1963, Automatica has kept abreast of the evolution of the field and has emerged as a leading publication driving its trends.
After being founded in 1963, Automatica became a journal of the International Federation of Automatic Control (IFAC) in 1969. It features a characteristic blend of theoretical and applied papers of archival, lasting value, reporting cutting-edge research results by authors from across the globe. Articles appear in distinct categories, including regular, brief, and survey papers, technical communiqués, correspondence items, and reviews of published books of interest to the readership. The journal occasionally publishes special issues on emerging new topics or on established mature topics of interest to a broad audience.
Automatica solicits original, high-quality contributions in all the categories listed above and in all areas of systems and control, interpreted in a broad sense and evolving constantly. Manuscripts may be submitted directly to a subject editor or, if the author is unsure of the subject area, to the Editor-in-Chief. Editorial procedures are in place to assure careful, fair, and prompt handling of all submitted articles. Accepted papers appear in the journal in the shortest time feasible given production constraints.