Distributed Minmax Strategy for Consensus Tracking in Differential Graphical Games: A Model-Free Approach

Impact Factor: 1.9 · JCR Quartile: Q3 (Computer Science, Cybernetics)
Yan Zhou, Jialing Zhou, Guanghui Wen, Minggang Gan, Tao Yang
{"title":"Distributed Minmax Strategy for Consensus Tracking in Differential Graphical Games: A Model-Free Approach","authors":"Yan Zhou, Jialing Zhou, Guanghui Wen, Minggang Gan, Tao Yang","doi":"10.1109/msmc.2023.3282774","DOIUrl":null,"url":null,"abstract":"This article focuses on the design of distributed minmax strategies for multiagent consensus tracking control problems with completely unknown dynamics in the presence of external disturbances or attacks. Each agent obtains its distributed minmax strategy by solving a multiagent zero-sum differential graphical game, which involves both nonadversarial and adversarial behaviors. Solving such a game is equivalent to solving a game algebraic Riccati equation (GARE). By making slight assumptions concerning performance matrices, <inline-formula xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"><tex-math notation=\"LaTeX\">${\\cal{L}}_{2}$</tex-math></inline-formula> stability and asymptotic stability of the closed-loop consensus error systems are strictly proven. Furthermore, inspired by data-driven off-policy reinforcement learning (RL), a model-free policy iteration (PI) algorithm is presented for each follower to generate the minmax strategy. Finally, simulations are performed to demonstrate the effectiveness of the proposed theoretical results.","PeriodicalId":43649,"journal":{"name":"IEEE Systems Man and Cybernetics Magazine","volume":"59 1","pages":"0"},"PeriodicalIF":1.9000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Systems Man and Cybernetics Magazine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/msmc.2023.3282774","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
引用次数: 0

Abstract

This article focuses on the design of distributed minmax strategies for multiagent consensus tracking control problems with completely unknown dynamics in the presence of external disturbances or attacks. Each agent obtains its distributed minmax strategy by solving a multiagent zero-sum differential graphical game, which involves both nonadversarial and adversarial behaviors. Solving such a game is equivalent to solving a game algebraic Riccati equation (GARE). Under mild assumptions on the performance matrices, ${\cal{L}}_{2}$ stability and asymptotic stability of the closed-loop consensus error systems are rigorously proven. Furthermore, inspired by data-driven off-policy reinforcement learning (RL), a model-free policy iteration (PI) algorithm is presented for each follower to generate its minmax strategy. Finally, simulations are performed to demonstrate the effectiveness of the proposed theoretical results.
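The GARE and the resulting minmax strategy are not reproduced on this page, so the following is only a generic illustration of what a zero-sum linear-quadratic formulation of this kind typically looks like; the symbols ($A$, $B$, $D$, $Q$, $R$, $\gamma$, $\delta_i$) are assumed placeholders, and the article's distributed, graph-dependent GARE may differ in detail:

$$A^{\top}P + PA + Q - PBR^{-1}B^{\top}P + \gamma^{-2}PDD^{\top}P = 0,$$
$$u_i^{*} = -R^{-1}B^{\top}P\,\delta_i, \qquad \omega_i^{*} = \gamma^{-2}D^{\top}P\,\delta_i,$$

where $\delta_i$ denotes agent $i$'s consensus tracking error, $u_i$ the control input (minimizing player), $\omega_i$ the disturbance or attack (maximizing player), and $\gamma$ the prescribed ${\cal{L}}_{2}$-gain bound; a guarantee of this kind is what the abstract's ${\cal{L}}_{2}$-stability claim refers to.

Likewise, as a rough sketch of the policy-iteration structure behind such a game (not the article's algorithm), the model-based version alternates Lyapunov-equation evaluations with policy updates for both players; the article's model-free variant instead estimates these updates from measured trajectory data via off-policy RL. A minimal model-based sketch, assuming known matrices $(A, B, D)$ and hypothetical weights, follows.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def zero_sum_policy_iteration(A, B, D, Q, R, gamma, K0, iters=50, tol=1e-8):
    # Illustrative MODEL-BASED policy iteration for a generic zero-sum game ARE.
    # The article's algorithm is model-free, distributed, and data-driven, so it
    # would estimate these updates from data rather than use A, B, D directly.
    n = A.shape[0]
    K = K0                           # initial stabilizing control gain
    L = np.zeros((D.shape[1], n))    # initial disturbance gain
    P_prev = np.zeros((n, n))
    for _ in range(iters):
        Ac = A - B @ K + D @ L       # closed loop under the current policies
        M = Q + K.T @ R @ K - gamma**2 * L.T @ L
        # Policy evaluation: solve Ac^T P + P Ac + M = 0
        P = solve_continuous_lyapunov(Ac.T, -M)
        # Policy improvement for both players
        K = np.linalg.solve(R, B.T @ P)      # minimizer: K = R^{-1} B^T P
        L = (1.0 / gamma**2) * (D.T @ P)     # maximizer: L = gamma^{-2} D^T P
        if np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K, L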