{"title":"用于连续时间非线性系统优化自适应控制的固定时间稳定梯度流","authors":"Mahdi Niroomand, Reihaneh Kardehi Moghaddam, Hamidreza Modares, Mohammad-Bagher Naghibi Sistani","doi":"10.1155/2024/5241035","DOIUrl":null,"url":null,"abstract":"<div>\n <p>This paper introduces an inclusive class of fixed-time stable continuous-time gradient flows (GFs). This class of GFs is then leveraged to learn optimal control solutions for nonlinear systems in fixed time. It is shown that the presented GF guarantees convergence within a fixed time from any initial condition to the exact minimum of functions that satisfy the Polyak–Łojasiewicz (PL) inequality. The presented fixed-time GF is then utilized to design fixed-time optimal adaptive control algorithms. To this end, a fixed-time reinforcement learning (RL) algorithm is developed on the basis of a single network adaptive critic (SNAC) to learn the solution to an infinite-horizon optimal control problem in a fixed-time convergent, online, adaptive, and forward-in-time manner. It is shown that the PL inequality in the presented RL algorithm amounts to a mild inequality condition on a few collected samples. This condition is much weaker than the standard persistence of excitation (PE) and finite duration PE that relies on a rank condition of a dataset. This is crucial for learning-enabled control systems as control systems can commit to learning an optimal controller from the beginning, in sharp contrast to existing results that rely on the PE and rank condition, and can only commit to learning after rich data samples are collected. Simulation results are provided to validate the performance and efficacy of the presented fixed-time RL algorithm.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/5241035","citationCount":"0","resultStr":"{\"title\":\"Fixed-Time Stable Gradient Flows for Optimal Adaptive Control of Continuous-Time Nonlinear Systems\",\"authors\":\"Mahdi Niroomand, Reihaneh Kardehi Moghaddam, Hamidreza Modares, Mohammad-Bagher Naghibi Sistani\",\"doi\":\"10.1155/2024/5241035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n <p>This paper introduces an inclusive class of fixed-time stable continuous-time gradient flows (GFs). This class of GFs is then leveraged to learn optimal control solutions for nonlinear systems in fixed time. It is shown that the presented GF guarantees convergence within a fixed time from any initial condition to the exact minimum of functions that satisfy the Polyak–Łojasiewicz (PL) inequality. The presented fixed-time GF is then utilized to design fixed-time optimal adaptive control algorithms. To this end, a fixed-time reinforcement learning (RL) algorithm is developed on the basis of a single network adaptive critic (SNAC) to learn the solution to an infinite-horizon optimal control problem in a fixed-time convergent, online, adaptive, and forward-in-time manner. It is shown that the PL inequality in the presented RL algorithm amounts to a mild inequality condition on a few collected samples. This condition is much weaker than the standard persistence of excitation (PE) and finite duration PE that relies on a rank condition of a dataset. 
This is crucial for learning-enabled control systems as control systems can commit to learning an optimal controller from the beginning, in sharp contrast to existing results that rely on the PE and rank condition, and can only commit to learning after rich data samples are collected. Simulation results are provided to validate the performance and efficacy of the presented fixed-time RL algorithm.</p>\\n </div>\",\"PeriodicalId\":14089,\"journal\":{\"name\":\"International Journal of Intelligent Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/5241035\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1155/2024/5241035\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/2024/5241035","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This paper introduces an inclusive class of fixed-time stable continuous-time gradient flows (GFs). This class of GFs is then leveraged to learn optimal control solutions for nonlinear systems in fixed time. It is shown that the presented GF guarantees convergence, within a fixed time and from any initial condition, to the exact minimum of functions that satisfy the Polyak–Łojasiewicz (PL) inequality. The presented fixed-time GF is then utilized to design fixed-time optimal adaptive control algorithms. To this end, a fixed-time reinforcement learning (RL) algorithm is developed on the basis of a single network adaptive critic (SNAC) to learn the solution to an infinite-horizon optimal control problem in a fixed-time convergent, online, adaptive, and forward-in-time manner. It is shown that the PL inequality in the presented RL algorithm amounts to a mild inequality condition on a few collected samples. This condition is much weaker than the standard persistence of excitation (PE) and finite-duration PE conditions, which rely on a rank condition on a dataset. This is crucial for learning-enabled control systems: the control system can commit to learning an optimal controller from the beginning, in sharp contrast to existing results that rely on PE and rank conditions and can only commit to learning after rich data samples have been collected. Simulation results are provided to validate the performance and efficacy of the presented fixed-time RL algorithm.
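To make the central mechanism concrete: a widely used two-term fixed-time gradient flow (a representative instance from the fixed-time optimization literature, not necessarily the paper's exact class) takes the form x_dot = -c1*g/||g||^(1-alpha) - c2*g*||g||^(beta-1) with g = grad f(x), c1, c2 > 0, 0 < alpha < 1, and beta > 1; under the PL inequality, its settling time admits a bound independent of the initial condition. The Python sketch below integrates such a flow on a simple PL function; the gains, exponents, and test function are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a two-term fixed-time stable gradient flow,
#   x_dot = -c1*g/||g||^(1-alpha) - c2*g*||g||^(beta-1),  g = grad f(x),
# with 0 < alpha < 1 and beta > 1. All constants and the test function
# are illustrative assumptions, not the paper's exact class of GFs.
import numpy as np

def grad_f(x):
    # Gradient of f(x) = 0.5*||x||^2; strong convexity implies the
    # Polyak-Lojasiewicz (PL) inequality holds for this f.
    return x

def settle_time(x0, c1=2.0, c2=2.0, alpha=0.5, beta=1.5,
                dt=1e-4, tol=1e-6, t_max=10.0):
    """Explicit-Euler integration; returns the time at which ||grad f|| < tol."""
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < t_max:
        g = grad_f(x)
        n = np.linalg.norm(g)
        if n < tol:
            return t
        # The first term dominates near the minimum, the second far from it;
        # together they yield a settling-time bound independent of x0.
        x = x - dt * (c1 * g / n**(1.0 - alpha) + c2 * g * n**(beta - 1.0))
        t += dt
    return t

# The settling time stays bounded as ||x0|| grows (fixed-time behavior),
# unlike a plain gradient flow, whose time to reach a fixed tolerance
# grows with the initial distance to the minimizer.
for scale in (1.0, 1e2, 1e4):
    print(f"||x0|| ~ {scale:g}: t_settle = {settle_time(scale * np.ones(2)):.3f}")
```

In the paper's setting, a GF of this class drives the adaptation of the SNAC critic weights, which is what yields fixed-time convergence of the learned optimal controller; the exact update law and the PL-type condition on the collected samples are given in the paper.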
Journal Introduction:
The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis, creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.