Multi-Agent Evolutionary Reinforcement Learning Based on Cooperative Games

IF 5.3 | CAS Tier 3 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence)
Jin Yu, Ya Zhang, Changyin Sun
{"title":"Multi-Agent Evolutionary Reinforcement Learning Based on Cooperative Games","authors":"Jin Yu;Ya Zhang;Changyin Sun","doi":"10.1109/TETCI.2024.3452119","DOIUrl":null,"url":null,"abstract":"Despite the significant advancements in single-agent evolutionary reinforcement learning, research exploring evolutionary reinforcement learning within multi-agent systems is still in its nascent stage. The integration of evolutionary algorithms (EA) and reinforcement learning (RL) has partially mitigated RL's reliance on the environment and provided it with an ample supply of data. Nonetheless, existing studies primarily focus on the indirect collaboration between RL and EA, which lacks sufficient exploration on the effective balance of individual and team rewards. To address this problem, this study introduces game theory to establish a dynamic cooperation framework between EA and RL, and proposes a multi-agent evolutionary reinforcement learning based on cooperative games. This framework facilitates more efficient direct collaboration between RL and EA, enhancing individual rewards while ensuring the attainment of team objectives. Initially, a cooperative policy is formed through a joint network to simplify the parameters of each agent to speed up the overall training process. Subsequently, RL and EA engage in cooperative games to determine whether RL jointly optimizes the same policy based on Pareto optimal results. Lastly, through double objectives optimization, a balance between the two types of rewards is achieved, with EA focusing on team rewards and RL focusing on individual rewards. Experimental results demonstrate that the proposed algorithm outperforms its single-algorithm counterparts in terms of competitiveness.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1650-1658"},"PeriodicalIF":5.3000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10666798/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Despite significant advancements in single-agent evolutionary reinforcement learning, research exploring evolutionary reinforcement learning within multi-agent systems is still in its nascent stage. The integration of evolutionary algorithms (EA) and reinforcement learning (RL) has partially mitigated RL's reliance on the environment and provided it with an ample supply of data. Nonetheless, existing studies focus primarily on indirect collaboration between RL and EA and do not sufficiently explore how to balance individual and team rewards effectively. To address this problem, this study introduces game theory to establish a dynamic cooperation framework between EA and RL, and proposes a multi-agent evolutionary reinforcement learning algorithm based on cooperative games. This framework enables more efficient direct collaboration between RL and EA, enhancing individual rewards while ensuring that team objectives are attained. First, a cooperative policy is formed through a joint network, which simplifies each agent's parameters and speeds up the overall training process. Next, RL and EA engage in cooperative games to determine, based on Pareto-optimal results, whether RL should jointly optimize the same policy. Finally, through dual-objective optimization, a balance between the two types of rewards is achieved, with EA focusing on team rewards and RL focusing on individual rewards. Experimental results demonstrate that the proposed algorithm outperforms its single-algorithm counterparts in terms of competitiveness.
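To make the pipeline the abstract describes more concrete, below is a minimal, self-contained NumPy sketch of the general idea: an EA population evolved on a team reward, an RL-style update on an individual reward, and a Pareto-dominance check that decides whether RL adopts (and jointly optimizes) the EA's shared policy. Everything here is an illustrative assumption, not the paper's implementation: the toy reward functions, the finite-difference stand-in for a policy gradient, and all names (ea_step, rl_step, pareto_dominates) are hypothetical.

```python
# Illustrative sketch only -- toy objectives and update rules, not the
# authors' algorithm. Assumes conflicting team/individual optima so the
# Pareto check is meaningful.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8      # size of the shared (joint-network) policy parameter vector
POP = 10     # EA population size
SIGMA = 0.1  # mutation scale

def team_reward(theta):
    # Toy stand-in for the team objective that EA optimizes.
    return -np.sum((theta - 1.0) ** 2)

def individual_reward(theta):
    # Toy stand-in for the per-agent objective that RL optimizes.
    return -np.sum((theta + 1.0) ** 2)

def pareto_dominates(a, b):
    # a dominates b if it is no worse on both objectives and better on one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def ea_step(population):
    # Evaluate on the TEAM reward, keep the best half, refill by mutation.
    ranked = sorted(population, key=team_reward, reverse=True)
    elites = ranked[: POP // 2]
    children = [e + SIGMA * rng.standard_normal(DIM) for e in elites]
    return elites + children

def rl_step(theta):
    # Stand-in for an RL update on the INDIVIDUAL reward: finite-difference
    # ascent (a real implementation would use a policy-gradient method).
    eps = 1e-3
    grad = np.array([
        (individual_reward(theta + eps * e) - individual_reward(theta - eps * e)) / (2 * eps)
        for e in np.eye(DIM)
    ])
    return theta + 0.05 * grad

population = [rng.standard_normal(DIM) for _ in range(POP)]
theta_rl = rng.standard_normal(DIM)

for it in range(50):
    population = ea_step(population)
    theta_rl = rl_step(theta_rl)
    best_ea = max(population, key=team_reward)
    scores_ea = (team_reward(best_ea), individual_reward(best_ea))
    scores_rl = (team_reward(theta_rl), individual_reward(theta_rl))
    # Cooperative-game step: if the EA's best policy Pareto-dominates the
    # RL policy, RL joins EA in optimizing that same shared policy.
    if pareto_dominates(scores_ea, scores_rl):
        theta_rl = best_ea.copy()

print("team reward:", team_reward(theta_rl),
      "individual reward:", individual_reward(theta_rl))
```

The design point the sketch tries to capture is the dual-objective split: EA selection pressure acts only on the team reward, the RL update acts only on the individual reward, and the Pareto comparison is the "cooperative game" that dynamically decides when the two optimizers should share one policy.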
Source Journal Metrics
CiteScore: 10.30
Self-citation rate: 7.50%
Articles published: 147
Journal Introduction: The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys. TETCI is an electronic-only publication and publishes six issues per year. Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few illustrative examples are glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for the IoT and Smart-X technologies.