{"title":"通过团队探索学习与不同的队友协调。","authors":"Hao Ding,Chengxing Jia,Zongzhang Zhang,Cong Guan,Feng Chen,Lei Yuan,Yang Yu","doi":"10.1109/tnnls.2025.3563773","DOIUrl":null,"url":null,"abstract":"Coordinating with different teammates is essential in cooperative multiagent systems (MASs). However, most multiagent reinforcement learning (MARL) methods assume fixed team compositions, which leads to agents overfitting their training partners and failing to cooperate well with different teams during the deployment phase. A common way to mitigate the problem is to anticipate teammate behaviors and adapt policies accordingly during cooperation. However, these methods use the same policy for both collecting information for modeling teammates and maximizing cooperation performance. We argue that these two goals may conflict and reduce the effectiveness of both. In this work, we propose coordinating with different teammates via team probing (CDP), a novel approach that rapidly adapts to different teams by disentangling probing and adaptation phases. Specifically, we first generate a diverse population of teams as training partners with a novel value-based diversity objective. Then, we train a probing module to probe and reveal the coordination pattern of each team with policy-dynamics reconstruction and get a representation space of the population. Finally, we train a generalist meta-policy consisting of several expert policies with module selection based on the clustering of the learned representation space. We empirically show that CDP surpasses existing policy adaptation methods in various complex multiagent scenarios with both seen and unseen teammates.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"102 1","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning to Coordinate With Different Teammates via Team Probing.\",\"authors\":\"Hao Ding,Chengxing Jia,Zongzhang Zhang,Cong Guan,Feng Chen,Lei Yuan,Yang Yu\",\"doi\":\"10.1109/tnnls.2025.3563773\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Coordinating with different teammates is essential in cooperative multiagent systems (MASs). However, most multiagent reinforcement learning (MARL) methods assume fixed team compositions, which leads to agents overfitting their training partners and failing to cooperate well with different teams during the deployment phase. A common way to mitigate the problem is to anticipate teammate behaviors and adapt policies accordingly during cooperation. However, these methods use the same policy for both collecting information for modeling teammates and maximizing cooperation performance. We argue that these two goals may conflict and reduce the effectiveness of both. In this work, we propose coordinating with different teammates via team probing (CDP), a novel approach that rapidly adapts to different teams by disentangling probing and adaptation phases. Specifically, we first generate a diverse population of teams as training partners with a novel value-based diversity objective. Then, we train a probing module to probe and reveal the coordination pattern of each team with policy-dynamics reconstruction and get a representation space of the population. 
Finally, we train a generalist meta-policy consisting of several expert policies with module selection based on the clustering of the learned representation space. We empirically show that CDP surpasses existing policy adaptation methods in various complex multiagent scenarios with both seen and unseen teammates.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"102 1\",\"pages\":\"\"},\"PeriodicalIF\":10.2000,\"publicationDate\":\"2025-05-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/tnnls.2025.3563773\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3563773","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract: Coordinating with different teammates is essential in cooperative multiagent systems (MASs). However, most multiagent reinforcement learning (MARL) methods assume fixed team compositions, which leads agents to overfit to their training partners and cooperate poorly with different teams at deployment. A common way to mitigate this problem is to anticipate teammate behaviors and adapt the policy accordingly during cooperation. However, existing methods use the same policy both to collect information for modeling teammates and to maximize cooperation performance; we argue that these two goals can conflict, reducing the effectiveness of both. In this work, we propose Coordinating with Different teammates via team Probing (CDP), a novel approach that rapidly adapts to different teams by disentangling the probing and adaptation phases. Specifically, we first generate a diverse population of teams as training partners using a novel value-based diversity objective. Then we train a probing module that reveals each team's coordination pattern through policy-dynamics reconstruction, yielding a representation space of the population. Finally, we train a generalist meta-policy composed of several expert policies, with module selection based on clustering in the learned representation space. We empirically show that CDP surpasses existing policy adaptation methods in various complex multiagent scenarios with both seen and unseen teammates.
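To make the two-phase design concrete, below is a minimal sketch of the probe-then-adapt deployment loop the abstract describes: a short probing interaction is encoded into a team representation, which is matched to the nearest training cluster to select an expert policy. All class names, shapes, the random stand-in encoder, and the environment interface are assumptions for illustration; the paper's actual probing module is learned via policy-dynamics reconstruction, and its centroids come from clustering the learned representation space, neither of which is reproduced here.

```python
# Illustrative sketch only: names, shapes, and stand-in components are
# assumptions, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

class TrajectoryEncoder:
    """Stand-in for the learned probing module: maps a short interaction
    trajectory with an unknown team to a fixed-size team representation."""
    def __init__(self, obs_dim, act_dim, rep_dim, seed=0):
        r = np.random.default_rng(seed)
        self.W = r.normal(size=(obs_dim + act_dim, rep_dim))

    def encode(self, trajectory):
        # trajectory: list of (observation, teammate_action) pairs
        feats = np.stack([np.concatenate([o, a]) for o, a in trajectory])
        return np.tanh(feats @ self.W).mean(axis=0)  # pool over time steps

def select_expert(representation, centroids):
    """Module selection: pick the expert policy whose training-time cluster
    centroid is closest to the probed team representation."""
    dists = np.linalg.norm(centroids - representation, axis=1)
    return int(np.argmin(dists))

# --- toy deployment rollout (hypothetical dimensions) ----------------------
obs_dim, act_dim, rep_dim, n_experts = 8, 4, 16, 3
encoder = TrajectoryEncoder(obs_dim, act_dim, rep_dim)
centroids = rng.normal(size=(n_experts, rep_dim))  # from train-time clustering

# Phase 1: probing -- gather a few steps of interaction with the unknown team.
probe_trajectory = [(rng.normal(size=obs_dim), rng.normal(size=act_dim))
                    for _ in range(10)]
team_rep = encoder.encode(probe_trajectory)

# Phase 2: adaptation -- commit to one expert policy for the rest of the episode.
expert_id = select_expert(team_rep, centroids)
print(f"probed team representation -> expert policy #{expert_id}")
```

The point of the disentanglement is visible in the structure: the probing phase can behave exploratorily to gather information about the team, while each expert policy is trained purely to maximize cooperation performance, so neither objective compromises the other.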
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.