Reinforcement Learning H∞ Optimal Formation Control for Perturbed Multiagent Systems With Nonlinear Faults

Authors: Yuxia Wu; Hongjing Liang; Shuxing Xuan; Choon Ki Ahn
DOI: 10.1109/TSMC.2024.3516048
Journal: IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 3, pp. 1935-1947
Published: 2024-12-24 (Journal Article)
Impact Factor: 8.6; JCR: Q1 (Automation & Control Systems)
URL: https://ieeexplore.ieee.org/document/10814669/
Citations: 0
Abstract
This article presents an optimal formation control strategy for multiagent systems based on a reinforcement learning (RL) technique, considering prescribed performance and unknown nonlinear faults. To optimize the control performance, an RL strategy is introduced based on an identifier–critic–actor–disturbance structure and a backstepping framework. The identifier, critic, actor, and disturbance neural networks (NNs) are employed to estimate unknown dynamics, assess system performance, carry out control actions, and derive the worst-case disturbance strategy, respectively. Under this scheme, the persistent excitation requirements are removed by adopting simplified NN updating laws, which are derived by applying gradient descent to designed positive functions rather than to the square of the Bellman residual. To achieve the desired error precision within the prescribed time, a constraining function and an error transformation scheme are employed. In addition, to enhance the system's robustness, a fault observer is utilized to compensate for the impact of the unknown nonlinear faults. The stability of the closed-loop system is assured, while the prescribed performance is realized. Finally, simulation examples validate the effectiveness of the proposed optimal control strategy.
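To make the prescribed-performance mechanism concrete, the sketch below illustrates the general idea of a constraining function and an error transformation. The abstract does not give the paper's exact functions, so the exponential performance bound, its parameters, and the atanh-based transformation here are common textbook choices, not the authors' specific design.

```python
import numpy as np

# Hypothetical constraining (performance) function: a decaying funnel
# rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf, which shrinks
# from rho0 toward the steady-state bound rho_inf. Parameters are
# illustrative, not taken from the paper.
def performance_bound(t, rho0=2.0, rho_inf=0.1, decay=1.0):
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

# A typical error transformation: map the constrained ratio
# e(t)/rho(t), which must stay in (-1, 1), onto an unconstrained
# variable via atanh. Keeping the transformed error bounded then
# forces the raw error to stay inside the shrinking funnel.
def transform_error(e, t):
    ratio = np.clip(e / performance_bound(t), -0.999, 0.999)
    return np.arctanh(ratio)

print(transform_error(0.5, t=0.0))  # small transformed error early on
```

As the bound tightens over time, the same raw error produces a larger transformed error, so any controller that keeps the transformed error small automatically meets the prescribed transient and steady-state precision.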
About the Journal
The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.