Improving multi-UAV cooperative path-finding through multiagent experience learning
Jiang Longting, Wei Ruixuan, Wang Dong
Applied Intelligence, vol. 54, no. 21, pp. 11103-11119 (published 2024-09-06)
DOI: 10.1007/s10489-024-05771-w
Abstract
A collaborators’ experiences learning (CEL) algorithm, based on multiagent reinforcement learning (MARL), is presented for multi-UAV cooperative path-finding, in which reaching destinations and avoiding obstacles are treated simultaneously as independent or interactive tasks. Inspired by the phenomenon of experience learning, we propose a multiagent experience learning theory grounded in MARL. A strategy for updating parameters randomly is also proposed so that homogeneous UAVs can effectively learn cooperative strategies, and the convergence of the algorithm is proven theoretically. To evaluate the algorithm, we conduct experiments with varying numbers of UAVs and compare against other algorithms. The results show that the proposed method enables experience sharing and learning among UAVs and completes the cooperative path-finding task well in unknown dynamic environments.
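The abstract does not specify the mechanics of CEL, but the two ingredients it names, experience sharing among homogeneous UAVs and a random parameter-updating strategy, can be sketched in a minimal toy form. The sketch below is an assumption-laden illustration, not the authors' implementation: `SharedExperienceBuffer`, `UAVAgent`, and the scalar `param` standing in for a policy network are all invented here, and the "update" is a toy rule that nudges the parameter toward the mean sampled reward.

```python
import random

class SharedExperienceBuffer:
    """Shared pool of (state, action, reward) transitions that all
    homogeneous UAV agents push to and sample from (hypothetical design)."""
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []

    def add(self, transition):
        # Drop the oldest transition once capacity is reached (FIFO).
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

class UAVAgent:
    """Toy agent whose single scalar 'param' stands in for policy weights."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.param = 0.0

    def update(self, batch, lr=0.1):
        # Toy learning rule: move param toward the mean reward in the batch.
        if not batch:
            return
        mean_reward = sum(r for (_, _, r) in batch) / len(batch)
        self.param += lr * (mean_reward - self.param)

def train_step(agents, buffer, batch_size=4):
    """Random parameter-updating (assumed interpretation): each step,
    one randomly chosen agent updates from the shared buffer, so
    experience gathered by any UAV can improve any other UAV."""
    chosen = random.choice(agents)
    chosen.update(buffer.sample(batch_size))
    return chosen.agent_id
```

Under this reading, the shared buffer is what realizes "experience sharing among UAVs": because the agents are homogeneous, a transition collected by one UAV is a valid training sample for all of them, and randomizing which agent updates each step spreads the learned behavior across the team.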
About the Journal
Focusing on research in artificial intelligence and neural networks, this journal addresses real-life problems in manufacturing, defense, management, government, and industry that are too complex to be solved through conventional approaches and instead require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments addressing real, complex problems. It provides a medium for exchanging the scientific research and technological achievements of the international community.