A deep reinforcement learning approach and its application in multi-USV adversarial game simulation
Jinjun Rao, Cong Wang, Mei Liu, Jinbo Chen, Jingtao Lei, Wojciech Giernacki
Applied Intelligence, vol. 55, no. 7 (2025). DOI: 10.1007/s10489-025-06380-x
https://link.springer.com/article/10.1007/s10489-025-06380-x
Abstract
With the advancement of unmanned surface vehicle (USV) intelligence and the maturation of cluster control technologies, intelligent decision-making methods for multi-USV adversarial games have become a pivotal technological focus. Deep reinforcement learning (DRL), a prominent branch of artificial intelligence, has recently achieved notable progress and holds significant potential for this field. In this paper, the intrinsic curiosity module (ICM), self-play (SP), and posthumous credit assignment (POCA) are integrated with proximal policy optimization (PPO) to address the challenges of sparse rewards, low sample utilization, and credit assignment in multi-USV adversarial games, yielding a novel proximal policy optimization algorithm, PPO-ICMSPPOCA. The algorithm generates intrinsic rewards through iterative training during multi-USV adversarial games while also evaluating each USV's specific contribution to the team and handling varying numbers of USVs. A perturbation mathematical model of a three-degree-of-freedom USV is established that accounts for the influence of external environmental disturbances and variations in the USV's state on its hydrodynamic performance. Using the Unity3D and ML-Agents toolkit platforms, multi-USV adversarial game simulation scenes that can integrate and load various reinforcement learning (RL) algorithms are developed. Symmetric and asymmetric adversarial game experiments at different scales are conducted. The experiments show that red teams trained with our algorithms learn adversarial tactics, such as troop dispersion and coordinated attacks, more quickly. Over 100 episodes, the red teams with ICM, SP, and POCA achieved win rates of 88.25%, 86.75%, and 91.33%, respectively, exhibiting higher game intelligence and obtaining higher cumulative rewards.
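As a rough illustration of the curiosity component mentioned in the abstract, the sketch below shows how an ICM-style intrinsic reward is commonly computed from the prediction error of a learned forward model. It is a minimal sketch, not the paper's implementation: the flat state vector, one-hot discrete actions, network sizes, scaling coefficient eta, and the class name ICM are all illustrative assumptions.

import torch
import torch.nn as nn

class ICM(nn.Module):
    def __init__(self, state_dim, action_dim, feat_dim=64, eta=0.01):
        super().__init__()
        self.eta = eta
        # Shared feature encoder for states.
        self.encoder = nn.Sequential(nn.Linear(state_dim, feat_dim), nn.ReLU())
        # Forward model: predicts next-state features from current features + action.
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + action_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))
        # Inverse model: predicts the action from consecutive state features.
        self.inverse_model = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, action_dim))

    def forward(self, s, a_onehot, s_next):
        phi, phi_next = self.encoder(s), self.encoder(s_next)
        phi_next_pred = self.forward_model(torch.cat([phi, a_onehot], dim=-1))
        # Intrinsic reward: scaled squared prediction error of the forward model.
        r_int = self.eta * 0.5 * (phi_next_pred - phi_next).pow(2).sum(dim=-1)
        # Logits for the inverse-model loss (keeps the encoder focused on
        # features the agent can influence).
        a_logits = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
        return r_int, phi_next_pred, phi_next.detach(), a_logits

In a setup like this, r_int would be added to the environment (extrinsic) reward fed to PPO, while the forward- and inverse-model losses are minimized jointly during training; how these terms are weighted in PPO-ICMSPPOCA is not specified here.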
Journal Introduction:
With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.