Xiongtao Shi, Yanjie Li, Chenglong Du, Chaoyang Chen, Guangdeng Zong, Weihua Gui
{"title":"基于强化学习的完全未知动态马尔可夫跳跃系统最优控制","authors":"Xiongtao Shi , Yanjie Li , Chenglong Du , Chaoyang Chen , Guangdeng Zong , Weihua Gui","doi":"10.1016/j.automatica.2024.111886","DOIUrl":null,"url":null,"abstract":"<div><p>In this paper, the optimal control problem of a class of unknown Markov jump systems (MJSs) is investigated via the parallel policy iteration-based reinforcement learning (PPI-RL) algorithms. First, by solving the linear parallel Lyapunov equation, a model-based PPI-RL algorithm is studied to learn the solution of nonlinear coupled algebraic Riccati equation (CARE) of MJSs with known dynamics, thereby updating the optimal control gain. Then, a novel partially model-free PPI-RL algorithm is proposed for the scenario that the dynamics of the MJS is partially unknown, in which the optimal solution of CARE is learned via the mixed input–output data of all modes. Furthermore, for the MJS with completely unknown dynamics, a completely model-free PPI-RL algorithm is developed to get the optimal control gain by removing the dependence of model information in the process of solving the optimal solution of CARE. It is proved that the proposed PPI-RL algorithms converge to the unique optimal solution of CARE for MJSs with known, partially unknown, and completely unknown dynamics, respectively. Finally, simulation results are illustrated to show the feasibility and effectiveness of the PPI-RL algorithms.</p></div>","PeriodicalId":55413,"journal":{"name":"Automatica","volume":"171 ","pages":"Article 111886"},"PeriodicalIF":4.8000,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0005109824003807/pdfft?md5=884f0aad8f5e53b8556ad35ca7c525f6&pid=1-s2.0-S0005109824003807-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning-based optimal control for Markov jump systems with completely unknown dynamics\",\"authors\":\"Xiongtao Shi , Yanjie Li , Chenglong Du , Chaoyang Chen , Guangdeng Zong , Weihua Gui\",\"doi\":\"10.1016/j.automatica.2024.111886\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In this paper, the optimal control problem of a class of unknown Markov jump systems (MJSs) is investigated via the parallel policy iteration-based reinforcement learning (PPI-RL) algorithms. First, by solving the linear parallel Lyapunov equation, a model-based PPI-RL algorithm is studied to learn the solution of nonlinear coupled algebraic Riccati equation (CARE) of MJSs with known dynamics, thereby updating the optimal control gain. Then, a novel partially model-free PPI-RL algorithm is proposed for the scenario that the dynamics of the MJS is partially unknown, in which the optimal solution of CARE is learned via the mixed input–output data of all modes. Furthermore, for the MJS with completely unknown dynamics, a completely model-free PPI-RL algorithm is developed to get the optimal control gain by removing the dependence of model information in the process of solving the optimal solution of CARE. It is proved that the proposed PPI-RL algorithms converge to the unique optimal solution of CARE for MJSs with known, partially unknown, and completely unknown dynamics, respectively. 
Finally, simulation results are illustrated to show the feasibility and effectiveness of the PPI-RL algorithms.</p></div>\",\"PeriodicalId\":55413,\"journal\":{\"name\":\"Automatica\",\"volume\":\"171 \",\"pages\":\"Article 111886\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2024-09-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0005109824003807/pdfft?md5=884f0aad8f5e53b8556ad35ca7c525f6&pid=1-s2.0-S0005109824003807-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Automatica\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0005109824003807\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automatica","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0005109824003807","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Reinforcement learning-based optimal control for Markov jump systems with completely unknown dynamics

Abstract

In this paper, the optimal control problem of a class of unknown Markov jump systems (MJSs) is investigated via parallel policy iteration-based reinforcement learning (PPI-RL) algorithms. First, by solving the linear parallel Lyapunov equation, a model-based PPI-RL algorithm is developed to learn the solution of the nonlinear coupled algebraic Riccati equation (CARE) of MJSs with known dynamics, thereby updating the optimal control gain. Then, a novel partially model-free PPI-RL algorithm is proposed for the scenario in which the dynamics of the MJS are partially unknown; here the optimal solution of the CARE is learned from the mixed input–output data of all modes. Furthermore, for MJSs with completely unknown dynamics, a completely model-free PPI-RL algorithm is developed that obtains the optimal control gain by removing the dependence on model information when solving for the optimal solution of the CARE. It is proved that the proposed PPI-RL algorithms converge to the unique optimal solution of the CARE for MJSs with known, partially unknown, and completely unknown dynamics, respectively. Finally, simulation results are presented to show the feasibility and effectiveness of the PPI-RL algorithms.
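To make the structure of the model-based step more concrete, the sketch below shows parallel policy iteration for a discrete-time Markov jump linear system with mode-dependent matrices A_i, B_i, cost weights Q_i, R_i, and mode transition matrix Pi: policy evaluation solves the coupled ("parallel") Lyapunov equations for all modes jointly, and policy improvement updates every mode's gain from the coupled solution. This is a generic, minimal illustration under discrete-time assumptions, not the paper's exact PPI-RL algorithm (which may be formulated in continuous time and also covers partially and completely model-free variants); all matrices, numbers, and function names are illustrative.

```python
# Minimal sketch (not the paper's exact PPI-RL algorithm): model-based parallel
# policy iteration for a discrete-time Markov jump linear system
#   x_{k+1} = A_i x_k + B_i u_k,  with mode i switching according to Pi,
# per-mode cost weights Q_i, R_i, and mode-dependent gains u_k = -K_i x_k.
import numpy as np

def coupled_lyapunov(A_cl, W, Pi, n_iter=500, tol=1e-10):
    """Policy evaluation: fixed-point iteration on the coupled Lyapunov equations
       P_i = W_i + A_cl_i^T ( sum_j Pi[i, j] P_j ) A_cl_i   for every mode i."""
    N, n = len(A_cl), A_cl[0].shape[0]
    P = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(n_iter):
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        P_new = [W[i] + A_cl[i].T @ E[i] @ A_cl[i] for i in range(N)]
        if max(np.max(np.abs(P_new[i] - P[i])) for i in range(N)) < tol:
            return P_new
        P = P_new
    return P

def policy_iteration(A, B, Q, R, Pi, K0, max_iter=50, tol=1e-9):
    """Alternate coupled-Lyapunov evaluation and per-mode gain improvement."""
    N = len(A)
    K = [k.copy() for k in K0]          # K0 must be mean-square stabilizing
    for _ in range(max_iter):
        # Policy evaluation over all modes in parallel
        A_cl = [A[i] - B[i] @ K[i] for i in range(N)]
        W = [Q[i] + K[i].T @ R[i] @ K[i] for i in range(N)]
        P = coupled_lyapunov(A_cl, W, Pi)
        # Policy improvement: K_i = (R_i + B_i^T E_i(P) B_i)^{-1} B_i^T E_i(P) A_i
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        K_new = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i],
                                 B[i].T @ E[i] @ A[i]) for i in range(N)]
        if max(np.max(np.abs(K_new[i] - K[i])) for i in range(N)) < tol:
            return K_new, P
        K = K_new
    return K, P

# Toy two-mode example (all numbers are made up, for illustration only).
A = [np.array([[0.8, 0.2], [0.0, 0.7]]),
     np.array([[0.6, -0.3], [0.1, 0.5]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[0.5], [1.0]])]
Q = [np.eye(2), np.eye(2)]
R = [np.eye(1), np.eye(1)]
Pi = np.array([[0.9, 0.1],              # mode transition probability matrix
               [0.2, 0.8]])
K0 = [np.zeros((1, 2)), np.zeros((1, 2))]   # both modes are open-loop Schur-stable
                                            # here, so zero gains are a valid start
K_opt, P_opt = policy_iteration(A, B, Q, R, Pi, K0)
print("mode-1 gain:", K_opt[0])
print("mode-2 gain:", K_opt[1])
```

The partially and completely model-free algorithms described in the abstract replace this model-based evaluation step with one estimated from the mixed input–output data of all modes; that data-driven step is specific to the paper and is not reproduced here.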
Journal overview:
Automatica is a leading archival publication in the field of systems and control. Today the field encompasses a broad set of areas and topics, and it is thriving not only in its own right but also through its impact on other fields, such as communications, computers, biology, energy, and economics. Since its inception in 1963, Automatica has kept abreast of the evolution of the field over the years and has emerged as a leading publication driving the trends in the field.
After being founded in 1963, Automatica became a journal of the International Federation of Automatic Control (IFAC) in 1969. It features a characteristic blend of theoretical and applied papers of archival, lasting value, reporting cutting-edge research results by authors across the globe. Articles appear in distinct categories, including regular, brief, and survey papers, technical communiqués, correspondence items, and reviews of published books of interest to the readership. The journal occasionally publishes special issues on emerging new topics or on established mature topics of interest to a broad audience.
Automatica solicits original, high-quality contributions in all the categories listed above and in all areas of systems and control, interpreted in a broad sense and evolving constantly. Manuscripts may be submitted directly to a subject editor, or to the Editor-in-Chief if the author is unsure about the subject area. Editorial procedures in place ensure careful, fair, and prompt handling of all submitted articles. Accepted papers appear in the journal in the shortest time feasible given production-time constraints.