Distributed Minmax Strategy for Consensus Tracking in Differential Graphical Games: A Model-Free Approach

Yan Zhou, Jialing Zhou, Guanghui Wen, Minggang Gan, Tao Yang

IEEE Systems Man and Cybernetics Magazine, October 2023. DOI: 10.1109/msmc.2023.3282774
Abstract
This article focuses on the design of distributed minmax strategies for multiagent consensus tracking control problems with completely unknown dynamics in the presence of external disturbances or attacks. Each agent obtains its distributed minmax strategy by solving a multiagent zero-sum differential graphical game, which involves both nonadversarial and adversarial behaviors. Solving such a game is equivalent to solving a game algebraic Riccati equation (GARE). Under mild assumptions on the performance matrices, $\mathcal{L}_2$ stability and asymptotic stability of the closed-loop consensus error systems are rigorously proven. Furthermore, inspired by data-driven off-policy reinforcement learning (RL), a model-free policy iteration (PI) algorithm is presented that allows each follower to generate its minmax strategy. Finally, simulations demonstrate the effectiveness of the proposed theoretical results.
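To make the GARE/PI connection concrete, the sketch below shows a model-based policy iteration for a single-agent zero-sum game of the kind the abstract describes: a minimizing controller and a maximizing disturbance alternate between policy evaluation (a Lyapunov equation) and policy improvement. This is only an illustrative skeleton under an assumed setup, namely dynamics $\dot{x} = Ax + Bu + Dw$ and cost $\int (x^\top Q x + u^\top R u - \gamma^2 w^\top w)\,dt$; the article's actual algorithm is distributed over the communication graph and model-free, and every matrix and function name here is hypothetical rather than taken from the paper.

```python
# Illustrative, model-based policy iteration (PI) for a zero-sum game
# algebraic Riccati equation (GARE). Assumed setup (NOT from the article):
#   dynamics  x_dot = A x + B u + D w
#   cost      integral of x'Qx + u'Ru - gamma^2 w'w
# The GARE is  A'P + PA + Q - P B R^{-1} B' P + gamma^{-2} P D D' P = 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def pi_zero_sum_gare(A, B, D, Q, R, gamma, K0=None, iters=100, tol=1e-10):
    n = A.shape[0]
    # K0 must stabilize A - B K0 for the first Lyapunov solve to be valid;
    # K0 = 0 works only if A is already Hurwitz (an assumption made here).
    K = np.zeros((B.shape[1], n)) if K0 is None else K0   # control gain
    W = np.zeros((D.shape[1], n))                         # disturbance gain
    P_prev = np.zeros((n, n))
    for _ in range(iters):
        # Policy evaluation: the value of the current policy pair
        # (u = -Kx, w = Wx) solves  Ac'P + P Ac + Q + K'RK - gamma^2 W'W = 0.
        Ac = A - B @ K + D @ W
        Qc = Q + K.T @ R @ K - gamma**2 * (W.T @ W)
        P = solve_continuous_lyapunov(Ac.T, -Qc)
        # Policy improvement for both players (minimizer u, maximizer w).
        K = np.linalg.solve(R, B.T @ P)
        W = (1.0 / gamma**2) * (D.T @ P)
        if np.linalg.norm(P - P_prev, ord='fro') < tol:
            break
        P_prev = P
    return P, K, W
```

Roughly speaking, the model-free variant the abstract refers to replaces the Lyapunov solve in the evaluation step with equations built from measured state and input trajectories via off-policy RL, so the dynamics matrices A, B, and D never need to be known.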