Relation-Aware Learning for Multitask Multiagent Cooperative Games

Impact Factor: 2.8 | CAS Region 4 (Computer Science) | JCR Q3, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yang Yu;Likun Yang;Zhourui Guo;Yongjian Ren;Qiyue Yin;Junge Zhang;Kaiqi Huang
{"title":"Relation-Aware Learning for Multitask Multiagent Cooperative Games","authors":"Yang Yu;Likun Yang;Zhourui Guo;Yongjian Ren;Qiyue Yin;Junge Zhang;Kaiqi Huang","doi":"10.1109/TG.2024.3436871","DOIUrl":null,"url":null,"abstract":"Collaboration among multiple tasks is advantageous for enhancing learning efficiency in multiagent reinforcement learning. To guide agents in cooperating with different teammates in multiple tasks, contemporary approaches encourage agents to exploit common cooperative patterns or identify the learning priorities of multiple tasks. Despite the progress made by these methods, they all assume that all cooperative tasks to be learned are related and desire similar agent policies. This is rarely the case in multiagent cooperation, where minor changes in team composition can lead to significant variations in cooperation, resulting in distinct cooperative strategies compete for limited learning resources. In this article, to tackle the challenge posed by multitask learning in potentially competing cooperative tasks, we propose a novel framework called relation-aware learning (RAL). RAL incorporates a relation awareness module in both task representation and task optimization, aiding in reasoning about task relationships and mitigating negative transfers among dissimilar tasks. To assess the performance of RAL, we conduct a comparative analysis with baseline methods in a multitask <italic>StarCraft</i> environment. The results demonstrate the superiority of RAL in multitask cooperative scenarios, particularly in scenarios involving multiple conflicting tasks.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"17 2","pages":"322-333"},"PeriodicalIF":2.8000,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Games","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10620657/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Collaboration among multiple tasks is advantageous for enhancing learning efficiency in multiagent reinforcement learning. To guide agents in cooperating with different teammates across multiple tasks, contemporary approaches encourage agents to exploit common cooperative patterns or to identify the learning priorities of multiple tasks. Despite the progress made by these methods, they all assume that the cooperative tasks to be learned are related and call for similar agent policies. This is rarely the case in multiagent cooperation, where minor changes in team composition can lead to significant variations in cooperation, resulting in distinct cooperative strategies that compete for limited learning resources. In this article, to tackle the challenge posed by multitask learning over potentially competing cooperative tasks, we propose a novel framework called relation-aware learning (RAL). RAL incorporates a relation awareness module in both task representation and task optimization, aiding in reasoning about task relationships and mitigating negative transfer among dissimilar tasks. To assess the performance of RAL, we conduct a comparative analysis with baseline methods in a multitask StarCraft environment. The results demonstrate the superiority of RAL in multitask cooperative scenarios, particularly in scenarios involving multiple conflicting tasks.
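The abstract does not specify how the relation awareness module is implemented. The PyTorch-style sketch below is only a minimal illustration of the general idea it describes: estimating pairwise task relations from learned task representations and using them to weight cross-task influence so that dissimilar tasks contribute less to shared updates, limiting negative transfer. The names TaskRelationModule and relation_weighted_loss are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only, not the authors' implementation of RAL.
# Task embeddings induce a relation matrix; losses from auxiliary tasks
# are down-weighted when their estimated relation to the current task is low.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskRelationModule(nn.Module):
    """Learns one embedding per task and derives pairwise task relations."""

    def __init__(self, num_tasks: int, embed_dim: int = 32):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, embed_dim)

    def relation_matrix(self) -> torch.Tensor:
        # Cosine similarity between task embeddings, clipped to [0, 1].
        e = F.normalize(self.task_embed.weight, dim=-1)
        return (e @ e.t()).clamp(min=0.0)


def relation_weighted_loss(task_losses: torch.Tensor,
                           relations: torch.Tensor,
                           current_task: int) -> torch.Tensor:
    # Weight each task's loss by its estimated relation to the current task,
    # so weakly related tasks barely affect the shared parameter update.
    weights = relations[current_task].detach()
    return (weights * task_losses).sum()


if __name__ == "__main__":
    module = TaskRelationModule(num_tasks=4)
    relations = module.relation_matrix()
    dummy_losses = torch.rand(4)  # placeholder per-task losses
    loss = relation_weighted_loss(dummy_losses, relations, current_task=0)
    print(relations, loss)
```

In the paper's framework the relation signal is used in both task representation and task optimization; this sketch only shows an optimization-side weighting under the stated assumptions.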
Source journal: IEEE Transactions on Games (Engineering - Electrical and Electronic Engineering)
CiteScore: 4.60 | Self-citation rate: 8.70% | Articles per year: 87