Haijun Ye, Chuanguo Chi, Guojie Qin, Yunlian Kuang
{"title":"网络化空战系统杀伤链优化的多智能体博弈策略","authors":"Haijun Ye, Chuanguo Chi, Guojie Qin, Yunlian Kuang","doi":"10.1049/cth2.70064","DOIUrl":null,"url":null,"abstract":"<p>In modern information warfare, single-agent and multi-agent systems (MAS) play a critical role in achieving integrated combat capabilities. This study examines agent-based game strategies in airborne networked systems, focusing specifically on the air-fleet kill chain—the central framework that unifies fighters, early-warning aircraft, and missiles into a cohesive, intelligent system. We analyse how MAS mitigate the complexity, uncertainty, and adversarial dynamics of aerial combat by optimising detection, decision-making, and strike efficiency through networked collaboration. Two representative scenarios are presented: (1) an AI-driven fighter (ALPHA AI) using genetic-fuzzy tree algorithms to surpass human pilots, and (2) adversarial multi-agent reinforcement learning (MADDPG) in OpenAI's simulation suite. We then propose a systematic MAS-based kill-chain optimisation design that integrates deep reinforcement learning, Bayesian inference, and tactical decision frameworks. Simulation results demonstrate enhanced coordination, real-time adaptability, and optimised damage probabilities in both single- and multi-agent confrontations. Our findings establish a theoretical foundation for transitioning from rule-based to AI-driven system-of-systems warfare in next-generation aerial combat.</p>","PeriodicalId":50382,"journal":{"name":"IET Control Theory and Applications","volume":"19 1","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cth2.70064","citationCount":"0","resultStr":"{\"title\":\"Multi-Agent Game Strategies for Kill Chain Optimization in Networked Aerial Combat Systems\",\"authors\":\"Haijun Ye, Chuanguo Chi, Guojie Qin, Yunlian Kuang\",\"doi\":\"10.1049/cth2.70064\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In modern information warfare, single-agent and multi-agent systems (MAS) play a critical role in achieving integrated combat capabilities. This study examines agent-based game strategies in airborne networked systems, focusing specifically on the air-fleet kill chain—the central framework that unifies fighters, early-warning aircraft, and missiles into a cohesive, intelligent system. We analyse how MAS mitigate the complexity, uncertainty, and adversarial dynamics of aerial combat by optimising detection, decision-making, and strike efficiency through networked collaboration. Two representative scenarios are presented: (1) an AI-driven fighter (ALPHA AI) using genetic-fuzzy tree algorithms to surpass human pilots, and (2) adversarial multi-agent reinforcement learning (MADDPG) in OpenAI's simulation suite. We then propose a systematic MAS-based kill-chain optimisation design that integrates deep reinforcement learning, Bayesian inference, and tactical decision frameworks. Simulation results demonstrate enhanced coordination, real-time adaptability, and optimised damage probabilities in both single- and multi-agent confrontations. 
Our findings establish a theoretical foundation for transitioning from rule-based to AI-driven system-of-systems warfare in next-generation aerial combat.</p>\",\"PeriodicalId\":50382,\"journal\":{\"name\":\"IET Control Theory and Applications\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cth2.70064\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Control Theory and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/cth2.70064\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Control Theory and Applications","FirstCategoryId":"94","ListUrlMain":"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/cth2.70064","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Multi-Agent Game Strategies for Kill Chain Optimization in Networked Aerial Combat Systems
In modern information warfare, single-agent and multi-agent systems (MAS) play a critical role in achieving integrated combat capabilities. This study examines agent-based game strategies in airborne networked systems, focusing specifically on the air-fleet kill chain—the central framework that unifies fighters, early-warning aircraft, and missiles into a cohesive, intelligent system. We analyse how MAS mitigate the complexity, uncertainty, and adversarial dynamics of aerial combat by optimising detection, decision-making, and strike efficiency through networked collaboration. Two representative scenarios are presented: (1) an AI-driven fighter (ALPHA AI) using genetic-fuzzy tree algorithms to surpass human pilots, and (2) adversarial multi-agent reinforcement learning (MADDPG) in OpenAI's simulation suite. We then propose a systematic MAS-based kill-chain optimisation design that integrates deep reinforcement learning, Bayesian inference, and tactical decision frameworks. Simulation results demonstrate enhanced coordination, real-time adaptability, and optimised damage probabilities in both single- and multi-agent confrontations. Our findings establish a theoretical foundation for transitioning from rule-based to AI-driven system-of-systems warfare in next-generation aerial combat.
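The abstract names Bayesian inference and damage-probability optimisation as components of the proposed kill-chain design but does not give the formulation. As a minimal sketch of how such an update could look, the Python example below assigns each kill-chain stage (detect, track, engage, assess) a Beta-Bernoulli belief over its success rate and multiplies posterior means into an end-to-end kill probability. The stage names, priors, observation counts, and the conditional-independence assumption are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch (not from the paper): Bayesian updating of per-stage
# kill-chain success probabilities and the resulting end-to-end kill probability.
# Stage names, priors, and observation counts are assumptions for demonstration.

from dataclasses import dataclass
from math import prod


@dataclass
class Stage:
    """One kill-chain stage with a Beta(alpha, beta) belief over its success rate."""
    name: str
    alpha: float  # prior pseudo-count of successes
    beta: float   # prior pseudo-count of failures

    def update(self, successes: int, failures: int) -> None:
        # Beta-Bernoulli conjugate update from observed engagement outcomes.
        self.alpha += successes
        self.beta += failures

    @property
    def mean(self) -> float:
        # Posterior mean success probability for this stage.
        return self.alpha / (self.alpha + self.beta)


def end_to_end_kill_probability(stages: list[Stage]) -> float:
    # Assuming conditional independence, the chain succeeds only if every
    # stage (detect -> track -> engage -> assess) succeeds.
    return prod(stage.mean for stage in stages)


if __name__ == "__main__":
    chain = [
        Stage("detect", alpha=8, beta=2),   # early-warning aircraft detection
        Stage("track",  alpha=6, beta=2),   # fighter radar track maintenance
        Stage("engage", alpha=5, beta=3),   # missile launch and fly-out
        Stage("assess", alpha=7, beta=1),   # battle-damage assessment
    ]
    print(f"prior kill probability: {end_to_end_kill_probability(chain):.3f}")

    # Fold in newly simulated engagement outcomes for the 'engage' stage.
    chain[2].update(successes=4, failures=1)
    print(f"posterior kill probability: {end_to_end_kill_probability(chain):.3f}")
```

Under these assumptions, folding new engagement outcomes into a single stage immediately shifts the end-to-end estimate, which is the kind of real-time adaptability the abstract attributes to the networked kill chain.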
About the journal:
IET Control Theory & Applications is devoted to control systems in the broadest sense, covering new theoretical results and the applications of new and established control methods. Among the topics of interest are system modelling, identification and simulation, the analysis and design of control systems (including computer-aided design), and practical implementation. The scope encompasses technological, economic, physiological (biomedical) and other systems, including man-machine interfaces.
Most of the papers published present original work from industrial and government laboratories and universities, but subject reviews and tutorial expositions of current methods are also welcome, as is correspondence discussing published papers.
Applications papers need not involve new theory. Papers that describe new realisations of established methods, control techniques applied in a novel situation, or practical studies comparing various designs are of interest. Of particular value are theoretical papers that discuss the applicability of new work, and applications papers that give rise to new theoretical developments.