{"title":"Practical Robust Formation Control for Nonlinear Multiagent Systems via Generative Adversarial Learning Framework: Theory and Experiment","authors":"Nuan Wen;Mir Feroskhan","doi":"10.1109/TSMC.2025.3550255","DOIUrl":null,"url":null,"abstract":"Cyber attacks and disturbances greatly impair the performance of formation tasks in multiagent systems (MASs). To achieve robust formation control against these challenges, this article proposes a generative adversarial learning framework that is theoretically transparent and practically applicable. Rather than relying on an end-to-end deep neural networks (DNNs) architecture, our work leverage a double robust structure that combine the representation capabilities of DNNs with established, theoretically grounded linear control theory, ultimately achieving a practical, learning-based robust formation for MASs. Initially, generative adversarial networks (GANs) are used to linearize agent dynamics under false data injection (FDI) attacks and external disturbances. Subsequently, a proportional-integral (PI) protocol is employed to achieve overall robust formation. We present rigorous theoretical analyses of both stages, demonstrating the guaranteed convergence of GANs training and the closed-loop formation errors. Our approach is directly validated through a series of physical experiments involving multi-quadrotors, demonstrating robustness against attacks and disturbances during formation flights, without the sim-to-real gap commonly encountered in learning-based control frameworks.","PeriodicalId":48915,"journal":{"name":"IEEE Transactions on Systems Man Cybernetics-Systems","volume":"55 6","pages":"4334-4347"},"PeriodicalIF":8.6000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Systems Man Cybernetics-Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10943219/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Cyber attacks and disturbances greatly impair the performance of formation tasks in multiagent systems (MASs). To achieve robust formation control against these challenges, this article proposes a generative adversarial learning framework that is theoretically transparent and practically applicable. Rather than relying on an end-to-end deep neural network (DNN) architecture, our work leverages a double-robust structure that combines the representation capabilities of DNNs with established, theoretically grounded linear control theory, ultimately achieving practical, learning-based robust formation control for MASs. First, generative adversarial networks (GANs) are used to linearize the agent dynamics under false data injection (FDI) attacks and external disturbances. Subsequently, a proportional-integral (PI) protocol is employed to achieve overall robust formation. We present rigorous theoretical analyses of both stages, demonstrating guaranteed convergence of both the GAN training and the closed-loop formation errors. Our approach is validated directly through a series of physical experiments involving multiple quadrotors, demonstrating robustness against attacks and disturbances during formation flights, without the sim-to-real gap commonly encountered in learning-based control frameworks.
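To make the second stage of the described pipeline concrete, below is a minimal, hypothetical Python sketch of a PI formation protocol acting on agents whose dynamics are assumed to have already been linearized to single integrators by the learning stage. It is not the authors' implementation: the gains kp and ki, the ring communication graph, the square formation offsets, and the sinusoidal disturbance are all illustrative assumptions, and the GAN linearization step is abstracted away.

```python
# Hypothetical sketch of a proportional-integral (PI) formation protocol on
# linearized (single-integrator) agents under a bounded additive disturbance.
# Gains, graph, offsets, and disturbance model are assumptions for illustration.
import numpy as np

n, dim, dt, T = 4, 2, 0.01, 2000           # agents, state dimension, step size, steps
kp, ki = 2.0, 0.5                           # proportional and integral gains (assumed)

# Undirected ring communication graph: each agent talks to its two neighbors
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

# Desired square formation, expressed as offsets from a common formation center
offsets = np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]], dtype=float)

x = np.random.randn(n, dim) * 3.0           # initial positions
xi = np.zeros((n, dim))                     # integral states

for k in range(T):
    d = 0.1 * np.sin(0.01 * k) * np.ones((n, dim))   # bounded disturbance (assumed)
    u = np.zeros((n, dim))
    for i in range(n):
        e = np.zeros(dim)
        for j in neighbors[i]:
            # relative formation error with respect to neighbor j
            e += (x[j] - offsets[j]) - (x[i] - offsets[i])
        xi[i] += e * dt                     # integral action attenuates slow disturbances
        u[i] = kp * e + ki * xi[i]
    x += (u + d) * dt                       # linearized closed-loop update

# Deviation of each agent from the (estimated) formation center; small values
# indicate the square formation has been reached despite the disturbance.
centers = x - offsets
print("final formation errors:", np.linalg.norm(centers - centers.mean(axis=0), axis=1))
```

Running this sketch, the per-agent errors shrink toward zero, illustrating how the integral term in the PI protocol rejects the residual disturbance once the dynamics have been rendered linear.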
About the Journal
The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.