{"title":"基于强化学习的多代理合作游戏","authors":"Hongbo Liu","doi":"10.1016/j.hcc.2024.100205","DOIUrl":null,"url":null,"abstract":"<div><p>Multi-agent reinforcement learning holds tremendous potential for revolutionizing intelligent systems across diverse domains. However, it is also concomitant with a set of formidable challenges, which include the effective allocation of credit values to each agent, real-time collaboration among heterogeneous agents, and an appropriate reward function to guide agent behavior. To handle these issues, we propose an innovative solution named the Graph Attention Counterfactual Multiagent Actor–Critic algorithm (GACMAC). This algorithm encompasses several key components: First, it employs a multi-agent actor–critic framework along with counterfactual baselines to assess the individual actions of each agent. Second, it integrates a graph attention network to enhance real-time collaboration among agents, enabling heterogeneous agents to effectively share information during handling tasks. Third, it incorporates prior human knowledge through a potential-based reward shaping method, thereby elevating the convergence speed and stability of the algorithm. We tested our algorithm on the StarCraft Multi-Agent Challenge (SMAC) platform, which is a recognized platform for testing multi-agent algorithms, and our algorithm achieved a win rate of over 95% on the platform, comparable to the current state-of-the-art multi-agent controllers.</p></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"4 1","pages":"Article 100205"},"PeriodicalIF":3.2000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667295224000084/pdfft?md5=0bf06b4b71bd2935634b00877ef59fba&pid=1-s2.0-S2667295224000084-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Cooperative multi-agent game based on reinforcement learning\",\"authors\":\"Hongbo Liu\",\"doi\":\"10.1016/j.hcc.2024.100205\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Multi-agent reinforcement learning holds tremendous potential for revolutionizing intelligent systems across diverse domains. However, it is also concomitant with a set of formidable challenges, which include the effective allocation of credit values to each agent, real-time collaboration among heterogeneous agents, and an appropriate reward function to guide agent behavior. To handle these issues, we propose an innovative solution named the Graph Attention Counterfactual Multiagent Actor–Critic algorithm (GACMAC). This algorithm encompasses several key components: First, it employs a multi-agent actor–critic framework along with counterfactual baselines to assess the individual actions of each agent. Second, it integrates a graph attention network to enhance real-time collaboration among agents, enabling heterogeneous agents to effectively share information during handling tasks. Third, it incorporates prior human knowledge through a potential-based reward shaping method, thereby elevating the convergence speed and stability of the algorithm. 
We tested our algorithm on the StarCraft Multi-Agent Challenge (SMAC) platform, which is a recognized platform for testing multi-agent algorithms, and our algorithm achieved a win rate of over 95% on the platform, comparable to the current state-of-the-art multi-agent controllers.</p></div>\",\"PeriodicalId\":100605,\"journal\":{\"name\":\"High-Confidence Computing\",\"volume\":\"4 1\",\"pages\":\"Article 100205\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2667295224000084/pdfft?md5=0bf06b4b71bd2935634b00877ef59fba&pid=1-s2.0-S2667295224000084-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"High-Confidence Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667295224000084\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"High-Confidence Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667295224000084","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Cooperative multi-agent game based on reinforcement learning

Hongbo Liu

High-Confidence Computing, Vol. 4, No. 1, Article 100205, March 2024. DOI: 10.1016/j.hcc.2024.100205
Multi-agent reinforcement learning holds tremendous potential for intelligent systems across diverse domains, but it also faces a set of formidable challenges: assigning credit effectively to each agent, enabling real-time collaboration among heterogeneous agents, and designing an appropriate reward function to guide agent behavior. To address these issues, we propose the Graph Attention Counterfactual Multi-Agent Actor–Critic algorithm (GACMAC), which comprises three key components. First, it employs a multi-agent actor–critic framework with counterfactual baselines to assess each agent's individual actions. Second, it integrates a graph attention network to enhance real-time collaboration, enabling heterogeneous agents to share information effectively while handling tasks. Third, it incorporates prior human knowledge through potential-based reward shaping, improving the algorithm's convergence speed and stability. We evaluated GACMAC on the StarCraft Multi-Agent Challenge (SMAC), a widely used benchmark for multi-agent algorithms, where it achieved a win rate of over 95%, comparable to current state-of-the-art multi-agent controllers.
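For concreteness, the counterfactual baseline named in the first component follows, in its standard COMA-style form (Foerster et al., 2018), an advantage that marginalizes out agent i's own action while holding the other agents' joint action fixed; the paper's exact variant may differ:

```latex
A^{i}(s,\mathbf{u}) \;=\; Q(s,\mathbf{u}) \;-\; \sum_{u'^{i}} \pi^{i}\!\left(u'^{i} \mid \tau^{i}\right) Q\!\left(s, \left(\mathbf{u}^{-i},\, u'^{i}\right)\right)
```

A positive advantage means agent i's chosen action outperforms what the critic expects under that agent's own policy, which isolates each agent's contribution to the shared team reward.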
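The graph attention component in the second point presumably builds on the standard GAT update (Veličković et al., 2018), in which each agent i weights messages from its neighbors N_i by learned attention coefficients. This is the textbook formulation, shown as an illustration rather than the paper's exact architecture:

```latex
\alpha_{ij} = \frac{\exp\!\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}h_i \,\Vert\, \mathbf{W}h_j]\big)\big)}{\sum_{k \in \mathcal{N}_i} \exp\!\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top}[\mathbf{W}h_i \,\Vert\, \mathbf{W}h_k]\big)\big)},
\qquad
h_i' = \sigma\!\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\, \mathbf{W} h_j\Big)
```

Because the attention weights depend on both endpoints' features, heterogeneous agents can learn whose information matters for the task at hand instead of averaging all teammates uniformly.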
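Finally, potential-based reward shaping (Ng et al., 1999) is the standard way to inject prior knowledge without altering the optimal policy: the environment reward is augmented with the discounted difference of a potential function Φ over states,

```latex
r'(s, a, s') = r(s, a, s') + \gamma\,\Phi(s') - \Phi(s)
```

A minimal sketch follows, assuming a hypothetical `potential` function that scores states using prior human knowledge (e.g., estimated progress in a SMAC scenario); it is not from the paper:

```python
# Minimal sketch of potential-based reward shaping (Ng et al., 1999).
# `potential` is a hypothetical, user-supplied function encoding prior
# human knowledge about how promising a state is.

def shaped_reward(reward: float, state, next_state, potential, gamma: float = 0.99) -> float:
    """Return r + gamma * Phi(s') - Phi(s).

    The shaping term telescopes over any trajectory, so the set of
    optimal policies of the original MDP is preserved.
    """
    return reward + gamma * potential(next_state) - potential(state)
```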