Cooperative multi-target hunting by unmanned surface vehicles based on multi-agent reinforcement learning

Jiawei Xia, Yasong Luo, Zhikun Liu, Yalun Zhang, Haoran Shi, Zhong Liu

Defence Technology (防务技术), Volume 29, November 2023, Pages 80-94
DOI: 10.1016/j.dt.2022.09.014
Citations: 6
Abstract
To solve the problem of multi-target hunting by an unmanned surface vehicle (USV) fleet, a hunting algorithm based on multi-agent reinforcement learning is proposed. First, the hunting environment and a kinematic model without boundary constraints are built, and the criteria for successful target capture are given. Then, the cooperative hunting problem of a USV fleet is modeled as a decentralized partially observable Markov decision process (Dec-POMDP), and a distributed partially observable multi-target hunting Proximal Policy Optimization (DPOMH-PPO) algorithm applicable to USVs is proposed. In addition, an observation model, a reward function and an action space suited to multi-target hunting tasks are designed. To handle the dynamically changing dimension of the observational features produced by a partially observable system, a feature embedding block is proposed: by combining two feature compression methods, column-wise max pooling (CMP) and column-wise average pooling (CAP), a fixed-length observational feature encoding is established. Finally, the centralized-training, decentralized-execution framework is adopted to train the hunting strategy; each USV in the fleet shares the same policy and performs actions independently. Simulation experiments verify the effectiveness of the DPOMH-PPO algorithm in test scenarios with different numbers of USVs. Moreover, the advantages of the proposed model are comprehensively analyzed in terms of algorithm performance, transfer effect across task scenarios and self-organization capability after damage, verifying the potential deployment and application of DPOMH-PPO in real environments.
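The abstract's feature embedding block compresses a variable number of observed entities into a fixed-length vector by concatenating column-wise max pooling (CMP) and column-wise average pooling (CAP). A minimal sketch of that idea (the function name and feature layout are assumptions, not the paper's implementation):

```python
import numpy as np

def embed_observations(obs_matrix: np.ndarray) -> np.ndarray:
    """Compress a variable-size observation matrix (one row per observed
    target/teammate, fixed feature columns) into a fixed-length vector
    by concatenating column-wise max pooling (CMP) with column-wise
    average pooling (CAP)."""
    cmp_vec = obs_matrix.max(axis=0)   # CMP: max over observed entities, per feature
    cap_vec = obs_matrix.mean(axis=0)  # CAP: mean over observed entities, per feature
    return np.concatenate([cmp_vec, cap_vec])

# However many entities a USV currently observes, the embedding
# length stays fixed at 2 * n_features, so it can feed a policy
# network with a static input dimension.
few = embed_observations(np.random.rand(2, 4))   # 2 entities observed
many = embed_observations(np.random.rand(7, 4))  # 7 entities observed
assert few.shape == many.shape == (8,)
```

The fixed output dimension is what lets a single shared policy network serve every USV regardless of how many targets are currently in its sensing range.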
Defence Technology (防务技术) — Mechanical Engineering; Control and Systems Engineering; Industrial and Manufacturing Engineering
CiteScore: 8.70
Self-citation rate: 0.00%
Articles published per year: 728
Review time: 25 days
About the journal:
Defence Technology, a peer-reviewed journal published monthly, aims to be the leading international academic exchange platform for research related to defence technology. It publishes original research papers with direct bearing on defence, with balanced coverage of analytical, experimental, numerical-simulation and applied investigations, spanning various disciplines of science, technology and engineering.