{"title":"SMART:使用变压器进行角色分配的顺序多智能体强化学习","authors":"Yixing Lan;Hao Gao;Xin Xu;Qiang Fang;Yujun Zeng","doi":"10.1109/TCDS.2024.3504256","DOIUrl":null,"url":null,"abstract":"Multiagent reinforcement learning (MARL) has received increasing attention and been used to solve cooperative multiagent decision-making and learning control tasks. However, the high complexity of the joint action space and the nonstationary learning process are two major problems that negatively impact on the sample efficiency and solution quality of MARL. To this end, this article proposes a novel approach named sequential MARL with role assignment using transformer (SMART). By learning the effects of different actions on state transitions and rewards, SMART realizes the action abstraction of the original action space and the adaptive role cognitive modeling of multiagent, which reduces the complexity of the multiagent exploration and learning process. Meanwhile, SMART uses causal transformer networks to update role assignment policy and action selection policy sequentially, alleviating the influence of nonstationary multiagent policy learning. The convergence characteristic of SMART is theoretically analyzed. Extensive experiments on the challenging Google football and StarCraft multiagent challenge are conducted, demonstrating that compared with mainstream MARL algorithms such as MAT and HAPPO, SMART achieves a new state-of-the-art performance. Meanwhile, the learned policies through SMART have good generalization ability when the number of agents changes.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 3","pages":"615-630"},"PeriodicalIF":5.0000,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SMART: Sequential Multiagent Reinforcement Learning With Role Assignment Using Transformer\",\"authors\":\"Yixing Lan;Hao Gao;Xin Xu;Qiang Fang;Yujun Zeng\",\"doi\":\"10.1109/TCDS.2024.3504256\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multiagent reinforcement learning (MARL) has received increasing attention and been used to solve cooperative multiagent decision-making and learning control tasks. However, the high complexity of the joint action space and the nonstationary learning process are two major problems that negatively impact on the sample efficiency and solution quality of MARL. To this end, this article proposes a novel approach named sequential MARL with role assignment using transformer (SMART). By learning the effects of different actions on state transitions and rewards, SMART realizes the action abstraction of the original action space and the adaptive role cognitive modeling of multiagent, which reduces the complexity of the multiagent exploration and learning process. Meanwhile, SMART uses causal transformer networks to update role assignment policy and action selection policy sequentially, alleviating the influence of nonstationary multiagent policy learning. The convergence characteristic of SMART is theoretically analyzed. Extensive experiments on the challenging Google football and StarCraft multiagent challenge are conducted, demonstrating that compared with mainstream MARL algorithms such as MAT and HAPPO, SMART achieves a new state-of-the-art performance. 
Meanwhile, the learned policies through SMART have good generalization ability when the number of agents changes.\",\"PeriodicalId\":54300,\"journal\":{\"name\":\"IEEE Transactions on Cognitive and Developmental Systems\",\"volume\":\"17 3\",\"pages\":\"615-630\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-11-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Cognitive and Developmental Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10772002/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cognitive and Developmental Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10772002/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
SMART: Sequential Multiagent Reinforcement Learning With Role Assignment Using Transformer
Multiagent reinforcement learning (MARL) has received increasing attention and has been used to solve cooperative multiagent decision-making and learning control tasks. However, the high complexity of the joint action space and the nonstationary learning process are two major problems that negatively impact the sample efficiency and solution quality of MARL. To this end, this article proposes a novel approach named sequential MARL with role assignment using transformer (SMART). By learning the effects of different actions on state transitions and rewards, SMART abstracts the original action space and adaptively models the role cognition of the agents, which reduces the complexity of multiagent exploration and learning. Meanwhile, SMART uses causal transformer networks to update the role assignment policy and the action selection policy sequentially, alleviating the nonstationarity of multiagent policy learning. The convergence of SMART is analyzed theoretically. Extensive experiments on the challenging Google Research Football and StarCraft Multiagent Challenge benchmarks show that, compared with mainstream MARL algorithms such as MAT and HAPPO, SMART achieves new state-of-the-art performance. Moreover, the policies learned by SMART generalize well when the number of agents changes.
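To make the sequential role-then-action decision flow concrete, the following is a minimal PyTorch sketch of how a causal transformer could first assign a role to each agent and then select an action conditioned on that role, with each agent attending only to agents ordered before it. This is an illustration under assumed design choices, not the authors' implementation: the module name SequentialRoleActionPolicy, the two linear heads, the greedy role choice, and all dimensions are hypothetical.

import torch
import torch.nn as nn

class SequentialRoleActionPolicy(nn.Module):
    """Hypothetical sketch: causal transformer over agents, role head then action head."""
    def __init__(self, obs_dim, n_roles, n_actions, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.obs_embed = nn.Linear(obs_dim, d_model)
        self.role_embed = nn.Embedding(n_roles, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.role_head = nn.Linear(d_model, n_roles)           # role assignment policy
        self.action_head = nn.Linear(2 * d_model, n_actions)   # action selection policy

    def forward(self, obs):  # obs: (batch, n_agents, obs_dim)
        n_agents = obs.size(1)
        # Causal mask: agent i may attend only to agents 1..i (sequential ordering).
        mask = torch.triu(torch.ones(n_agents, n_agents, dtype=torch.bool), diagonal=1)
        h = self.backbone(self.obs_embed(obs), mask=mask)
        role_logits = self.role_head(h)                         # (batch, n_agents, n_roles)
        roles = role_logits.argmax(dim=-1)                      # greedy roles, for illustration only
        # Action distribution conditioned on the agent's features and its assigned role.
        action_in = torch.cat([h, self.role_embed(roles)], dim=-1)
        action_logits = self.action_head(action_in)             # (batch, n_agents, n_actions)
        return role_logits, action_logits

# Usage: batch of 2, 3 agents, 10-dim observations, 4 abstract roles, 6 primitive actions.
policy = SequentialRoleActionPolicy(obs_dim=10, n_roles=4, n_actions=6)
role_logits, action_logits = policy(torch.randn(2, 3, 10))
print(role_logits.shape, action_logits.shape)  # torch.Size([2, 3, 4]) torch.Size([2, 3, 6])

The causal mask is meant to mirror the sequential update described in the abstract, in which each agent's role assignment and action selection depend only on the agents ordered before it; the actual network architecture and training procedure of SMART are detailed in the full article.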
Journal description:
The IEEE Transactions on Cognitive and Developmental Systems (TCDS) focuses on advances in the study of development and cognition in natural (humans, animals) and artificial (robots, agents) systems. It welcomes contributions from multiple related disciplines including cognitive systems, cognitive robotics, developmental and epigenetic robotics, autonomous and evolutionary robotics, social structures, multi-agent and artificial life systems, computational neuroscience, and developmental psychology. Articles on theoretical, computational, application-oriented, and experimental studies as well as reviews in these areas are considered.