Predefined-time reinforcement learning for optimal feedback control

IF 4.8 | CAS Tier 2 (Computer Science) | JCR Q1 (AUTOMATION & CONTROL SYSTEMS)
Nick-Marios T. Kokolakis, Kyriakos G. Vamvoudakis, Wassim M. Haddad
{"title":"用于最优反馈控制的预定义时间强化学习","authors":"Nick-Marios T. Kokolakis ,&nbsp;Kyriakos G. Vamvoudakis ,&nbsp;Wassim M. Haddad","doi":"10.1016/j.automatica.2025.112421","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we develop an online predefined time-convergent reinforcement learning architecture to solve the optimal predefined-time stabilization problem. Specifically, we introduce the problem of optimal predefined-time stabilization to construct feedback controllers that guarantee closed-loop system predefined-time stability while optimizing a given performance measure. The predefined time stability of the closed-loop system is established via a Lyapunov function satisfying a differential inequality while simultaneously serving as a solution to the steady-state Hamilton–Jacobi–Bellman equation ensuring optimality. Given that the Hamilton–Jacobi–Bellman equation is generally difficult to solve, we develop a critic-only reinforcement learning-based algorithm to learn the solution to the steady-state Hamilton–Jacobi–Bellman equation in predefined time. In particular, a non-Lipschitz experience replay-based learning law utilizing recorded and current data is introduced for updating the critic weights to learn the value function. The non-Lipschitz property of the dynamics and letting the learning rate as a function of the predefined time gives rise to predefined-time convergence, while the experience replay-based approach eliminates the need to satisfy the persistence of excitation condition as long as the recorded data set is sufficiently rich. Finally, an illustrative numerical example is provided to demonstrate the efficacy of the proposed approach.</div></div>","PeriodicalId":55413,"journal":{"name":"Automatica","volume":"179 ","pages":"Article 112421"},"PeriodicalIF":4.8000,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Predefined-time reinforcement learning for optimal feedback control\",\"authors\":\"Nick-Marios T. Kokolakis ,&nbsp;Kyriakos G. Vamvoudakis ,&nbsp;Wassim M. Haddad\",\"doi\":\"10.1016/j.automatica.2025.112421\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In this paper, we develop an online predefined time-convergent reinforcement learning architecture to solve the optimal predefined-time stabilization problem. Specifically, we introduce the problem of optimal predefined-time stabilization to construct feedback controllers that guarantee closed-loop system predefined-time stability while optimizing a given performance measure. The predefined time stability of the closed-loop system is established via a Lyapunov function satisfying a differential inequality while simultaneously serving as a solution to the steady-state Hamilton–Jacobi–Bellman equation ensuring optimality. Given that the Hamilton–Jacobi–Bellman equation is generally difficult to solve, we develop a critic-only reinforcement learning-based algorithm to learn the solution to the steady-state Hamilton–Jacobi–Bellman equation in predefined time. In particular, a non-Lipschitz experience replay-based learning law utilizing recorded and current data is introduced for updating the critic weights to learn the value function. 
The non-Lipschitz property of the dynamics and letting the learning rate as a function of the predefined time gives rise to predefined-time convergence, while the experience replay-based approach eliminates the need to satisfy the persistence of excitation condition as long as the recorded data set is sufficiently rich. Finally, an illustrative numerical example is provided to demonstrate the efficacy of the proposed approach.</div></div>\",\"PeriodicalId\":55413,\"journal\":{\"name\":\"Automatica\",\"volume\":\"179 \",\"pages\":\"Article 112421\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2025-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Automatica\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0005109825003152\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automatica","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0005109825003152","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we develop an online predefined-time-convergent reinforcement learning architecture to solve the optimal predefined-time stabilization problem. Specifically, we introduce the problem of optimal predefined-time stabilization in order to construct feedback controllers that guarantee predefined-time stability of the closed-loop system while optimizing a given performance measure. Predefined-time stability of the closed-loop system is established via a Lyapunov function that satisfies a differential inequality and simultaneously serves as a solution to the steady-state Hamilton–Jacobi–Bellman equation, thereby ensuring optimality. Since the Hamilton–Jacobi–Bellman equation is generally difficult to solve, we develop a critic-only reinforcement learning algorithm that learns the solution to the steady-state Hamilton–Jacobi–Bellman equation in predefined time. In particular, a non-Lipschitz, experience-replay-based learning law utilizing both recorded and current data is introduced for updating the critic weights to learn the value function. The non-Lipschitz property of the dynamics, together with a learning rate chosen as a function of the predefined time, gives rise to predefined-time convergence, while the experience-replay-based approach eliminates the need to satisfy a persistence-of-excitation condition as long as the recorded data set is sufficiently rich. Finally, an illustrative numerical example is provided to demonstrate the efficacy of the proposed approach.
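To make the two conditions named in the abstract concrete, the following is a minimal sketch in standard notation. It is an assumption about the general forms used in this literature, not the paper's own equations: the specific inequality, exponent p, and cost structure may differ. It considers a control-affine system \(\dot{x} = f(x) + g(x)u\) with running cost \(r(x,u) = Q(x) + u^{\top}Ru\).

```latex
% One commonly used sufficient condition for predefined-time stability:
% if V is positive definite and, for some 0 < p <= 1 and a user-chosen
% settling-time bound T_c > 0,
\dot{V}(x(t)) \;\le\; -\frac{1}{p\,T_c}\, e^{V(x(t))^{p}}\, V(x(t))^{1-p},
% then the origin is reached no later than the predefined time T_c.

% Steady-state Hamilton-Jacobi-Bellman equation for the optimal value
% function V*, together with the associated optimal feedback:
0 \;=\; Q(x) + \nabla V^{*}(x)^{\top} f(x)
       - \tfrac{1}{4}\,\nabla V^{*}(x)^{\top} g(x)\,R^{-1} g(x)^{\top} \nabla V^{*}(x),
\qquad
u^{*}(x) \;=\; -\tfrac{1}{2}\,R^{-1} g(x)^{\top} \nabla V^{*}(x).
```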
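The abstract's critic-only, experience-replay learning law can likewise be illustrated with a short sketch. Everything below is a hypothetical reconstruction under common assumptions in this literature: the names (`frac_pow`, `critic_step`), the regressor normalization, the exponent `gamma`, and the constant learning rate `alpha` are illustrative and not taken from the paper (where the learning rate is additionally a function of the predefined time).

```python
import numpy as np

def frac_pow(e, gamma):
    """Signed fractional power sign(e) * |e|**gamma.
    For 0 < gamma < 1 this is non-Lipschitz at e = 0 -- the standard
    mechanism behind finite-/predefined-time convergence of the error."""
    return np.sign(e) * np.abs(e) ** gamma

def critic_step(W, sigma, r, memory, alpha, gamma=0.5):
    """One critic-only update with experience replay (illustrative).

    W      -- critic weights in the approximation V(x) ~= W @ phi(x)
    sigma  -- current regressor, i.e. d/dt phi(x) along the trajectory
    r      -- current running cost r(x, u)
    memory -- recorded (sigma_k, r_k) pairs from earlier in the run
    alpha  -- learning rate (constant here; in the paper it also
              depends on the predefined time)

    Under the critic approximation, the Bellman (HJB) residual is
    e = r + W @ sigma; the update drives it toward zero using both the
    current sample and the replayed memory, so a sufficiently rich
    memory replaces the persistence-of-excitation requirement.
    """
    dW = np.zeros_like(W)
    for s_k, r_k in [(sigma, r)] + list(memory):
        e_k = r_k + W @ s_k                # Bellman residual on sample k
        s_bar = s_k / (1.0 + s_k @ s_k)    # normalized regressor
        dW -= alpha * s_bar * frac_pow(e_k, gamma)
    return W + dW

# Toy call with three basis functions and a random recorded data set.
rng = np.random.default_rng(0)
W = np.zeros(3)
memory = [(rng.standard_normal(3), rng.uniform()) for _ in range(20)]
for _ in range(200):
    W = critic_step(W, rng.standard_normal(3), rng.uniform(), memory, alpha=0.05)
```

The fractional-power term is what makes the update non-Lipschitz at zero error, and summing the same update over a recorded memory of regressor/cost pairs is what lets a sufficiently rich data set stand in for persistence of excitation.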
Source journal

Automatica (Engineering Technology – Engineering: Electrical & Electronic)
CiteScore: 10.70
Self-citation rate: 7.80%
Articles per year: 617
Review time: 5 months
Journal overview: Automatica is a leading archival publication in the field of systems and control. The field encompasses today a broad set of areas and topics, and is thriving not only within itself but also in terms of its impact on other fields, such as communications, computers, biology, energy and economics. Since its inception in 1963, Automatica has kept abreast with the evolution of the field over the years, and has emerged as a leading publication driving the trends in the field. After being founded in 1963, Automatica became a journal of the International Federation of Automatic Control (IFAC) in 1969.

It features a characteristic blend of theoretical and applied papers of archival, lasting value, reporting cutting-edge research results by authors across the globe. It features articles in distinct categories, including regular, brief and survey papers, technical communiqués, correspondence items, as well as reviews on published books of interest to the readership. It occasionally publishes special issues on emerging new topics or established mature topics of interest to a broad audience.

Automatica solicits original high-quality contributions in all the categories listed above, and in all areas of systems and control interpreted in a broad sense and evolving constantly. They may be submitted directly to a subject editor or to the Editor-in-Chief if not sure about the subject area. Editorial procedures in place assure careful, fair, and prompt handling of all submitted articles. Accepted papers appear in the journal in the shortest time feasible given production time constraints.