{"title":"MDDP: Making Decisions From Different Perspectives in Multiagent Reinforcement Learning","authors":"Wei Li;Ziming Qiu;Shitong Shao;Aiguo Song","doi":"10.1109/TG.2023.3329376","DOIUrl":null,"url":null,"abstract":"Multiagent reinforcement learning (MARL) has made remarkable progress in recent years. However, in most MARL methods, agents share a policy or value network, which is easy to result in similar behaviors of agents, and thus, limits the flexibility of the method to handle complex tasks. To enhance the diversity of agent behaviors, we propose a novel method, making decisions from different perspectives (MDDP). This method enables agents to switch flexibly between different policy roles and make decisions from different perspectives, which can improve the adaptability of policy learning in complex scenarios. Specifically, in MDDP, we design a new self-attention and gated recurrent unit (GRU)-based dueling architecture network (SG-DAN) to estimate the individual \n<inline-formula><tex-math>$Q$</tex-math></inline-formula>\n-values. SG-DAN contains two components: 1) the new self-attention-based role-switching network (SAR) and the capable GRU-based state value estimation network (GSE). SAR takes charge of action advantage estimation and GSE is responsible for state value estimation. Experimental results on the challenging \n<italic>StarCraft</i>\n II micromanagement benchmark not only verify the modeling reasonability of MDDP but also demonstrate its performance superiority over the related advanced approaches.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"621-634"},"PeriodicalIF":1.7000,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Games","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10304394/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Multiagent reinforcement learning (MARL) has made remarkable progress in recent years. However, in most MARL methods, agents share a policy or value network, which tends to produce similar behaviors across agents and thus limits the flexibility of these methods on complex tasks. To enhance the diversity of agent behaviors, we propose a novel method, making decisions from different perspectives (MDDP). MDDP enables agents to switch flexibly between different policy roles and make decisions from different perspectives, improving the adaptability of policy learning in complex scenarios. Specifically, in MDDP, we design a new self-attention and gated recurrent unit (GRU)-based dueling architecture network (SG-DAN) to estimate the individual $Q$-values. SG-DAN contains two components: 1) a self-attention-based role-switching network (SAR), which estimates the action advantages, and 2) a GRU-based state value estimation network (GSE), which estimates the state value. Experimental results on the challenging StarCraft II micromanagement benchmark not only verify the soundness of MDDP's modeling but also demonstrate its performance superiority over related state-of-the-art approaches.
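To make the described decomposition concrete, below is a minimal PyTorch sketch of an SG-DAN-style dueling Q-network. The abstract does not specify the exact layer layouts, so the class `SGDANSketch`, the per-action attention tokens, and all dimensions are illustrative assumptions, not the paper's implementation; it only shows the stated structure: a self-attention branch (SAR) producing action advantages, a GRU branch (GSE) producing the state value, and the standard dueling combination Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

```python
# Hypothetical sketch of an SG-DAN-style dueling Q-network; dimensions and
# layer choices are assumptions, since the abstract gives no architecture
# details beyond the SAR/GSE split.
import torch
import torch.nn as nn


class SGDANSketch(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int,
                 hidden_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        # SAR branch (assumed form): one learned query token per candidate
        # action/role; tokens attend over the encoded observation history.
        self.action_tokens = nn.Parameter(torch.randn(n_actions, hidden_dim))
        self.attn = nn.MultiheadAttention(hidden_dim, n_heads, batch_first=True)
        self.adv_head = nn.Linear(hidden_dim, 1)
        # GSE branch (assumed form): a GRU over the history yields V(s).
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_seq: torch.Tensor, h0=None):
        # obs_seq: (batch, time, obs_dim); Q-values for the last time step.
        x = torch.relu(self.encoder(obs_seq))                         # (B, T, H)
        # GSE: recurrent state-value estimation.
        out, h = self.gru(x, h0)                                      # (B, T, H)
        value = self.value_head(out[:, -1])                           # (B, 1)
        # SAR: action tokens attend to the encoded history to score advantages.
        batch = x.size(0)
        queries = self.action_tokens.unsqueeze(0).expand(batch, -1, -1)
        attended, _ = self.attn(queries, x, x)                        # (B, A, H)
        adv = self.adv_head(attended).squeeze(-1)                     # (B, A)
        # Dueling combination with the usual mean-advantage baseline.
        q = value + adv - adv.mean(dim=1, keepdim=True)               # (B, A)
        return q, h


if __name__ == "__main__":
    net = SGDANSketch(obs_dim=32, n_actions=6)
    q, h = net(torch.randn(8, 10, 32))
    print(q.shape)  # torch.Size([8, 6])
```

The mean-advantage baseline is the standard identifiability trick from dueling networks: subtracting the mean advantage pins down how the value and advantage streams split the Q-value, so each agent's GRU-based value estimate and attention-based advantage estimate remain separately meaningful.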