GAILPG: Multiagent Policy Gradient With Generative Adversarial Imitation Learning

Authors: Wei Li; Shiyi Huang; Ziming Qiu; Aiguo Song
Journal: IEEE Transactions on Games, vol. 17, no. 1, pp. 62-75
DOI: 10.1109/TG.2024.3375515
Published: 2024-03-14
URL: https://ieeexplore.ieee.org/document/10465634/
Citations: 0
Abstract
In reinforcement learning, agents must sufficiently explore the environment and efficiently exploit existing experiences before solving their tasks, particularly in cooperative multiagent scenarios, where the state and action spaces grow exponentially with the number of agents. Enhancing the exploration ability of agents and improving the utilization efficiency of experiences are therefore two critical issues in cooperative multiagent reinforcement learning. We propose a novel method called generative adversarial imitation learning policy gradients (GAILPG). Its contributions are twofold: first, we integrate generative adversarial self-imitation learning into the multiagent actor–critic framework to improve the utilization efficiency of experiences, further assisting policy training; second, we design a new curiosity module to enhance the exploration ability of the agents. Experimental results on the StarCraft II micromanagement benchmark demonstrate that GAILPG surpasses state-of-the-art policy-based methods and is even on par with value-based methods. Ablation experiments validate the soundness of the discriminator module and the curiosity module encapsulated in our method.
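The abstract does not spell out the paper's architecture, but the core idea of generative adversarial self-imitation is standard: a discriminator is trained to tell (state, action) pairs drawn from a buffer of the agent's own high-return past trajectories apart from pairs produced by the current policy, and its output is turned into an imitation reward that pushes the policy toward its best prior behavior. The sketch below illustrates that idea only; the network, features, dimensions, and learning rate are all hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT = 4  # hypothetical dimensionality of concatenated (state, action) features


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class Discriminator:
    """Logistic discriminator D(s, a): probability that a (state, action)
    pair comes from the buffer of good past trajectories rather than from
    the current policy (a minimal stand-in for the paper's network)."""

    def __init__(self, feat_dim, lr=0.1):
        self.w = np.zeros(feat_dim)
        self.b = 0.0
        self.lr = lr

    def prob(self, x):
        return sigmoid(x @ self.w + self.b)

    def train_step(self, expert_x, policy_x):
        # Gradient ascent on log D(expert) + log(1 - D(policy)).
        pe = self.prob(expert_x)   # should move toward 1
        pp = self.prob(policy_x)   # should move toward 0
        grad_w = expert_x.T @ (1 - pe) / len(expert_x) - policy_x.T @ pp / len(policy_x)
        grad_b = np.mean(1 - pe) - np.mean(pp)
        self.w += self.lr * grad_w
        self.b += self.lr * grad_b

    def imitation_reward(self, x):
        # GAIL-style shaping: reward is high when the pair resembles the
        # stored high-return experiences.
        d = np.clip(self.prob(x), 1e-6, 1 - 1e-6)
        return np.log(d) - np.log(1.0 - d)


# Toy data: "good" past pairs cluster around +1, current-policy pairs around -1.
expert = rng.normal(1.0, 0.5, size=(256, FEAT))
policy = rng.normal(-1.0, 0.5, size=(256, FEAT))

disc = Discriminator(FEAT)
for _ in range(200):
    disc.train_step(expert, policy)
```

After training, `imitation_reward` is larger on buffer-like pairs than on policy-like ones, so adding it to the environment reward steers the actor–critic update toward previously successful behavior; a curiosity bonus (as the paper's second module) would be added on top of this to keep exploration alive.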