A fully value distributional deep reinforcement learning framework for multi-agent cooperation

Impact Factor 6.0 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Mingsheng Fu, Liwei Huang, Fan Li, Hong Qu, Chengzhong Xu
{"title":"A fully value distributional deep reinforcement learning framework for multi-agent cooperation","authors":"Mingsheng Fu ,&nbsp;Liwei Huang ,&nbsp;Fan Li ,&nbsp;Hong Qu ,&nbsp;Chengzhong Xu","doi":"10.1016/j.neunet.2024.107035","DOIUrl":null,"url":null,"abstract":"<div><div>Distributional Reinforcement Learning (RL) extends beyond estimating the expected value of future returns by modeling its entire distribution, offering greater expressiveness and capturing deeper insights of the value function. To leverage this advantage, distributional multi-agent systems based on value-decomposition techniques were proposed recently. Ideally, a distributional multi-agent system should be fully distributional, which means both the individual and global value functions should be constructed in distributional forms. However, recent studies show that directly applying traditional value-decomposition techniques to this fully distributional form cannot guarantee the satisfaction of the necessary individual-global-max (IGM) principle. To address this problem, we propose a novel fully value distributional multi-agent framework based on value-decomposition and prove that the IGM principle can be guaranteed under our framework. Based on this framework, a practical deep reinforcement learning model called Fully Distributional Multi-Agent Cooperation (FDMAC) is proposed, and the effectiveness of FDMAC is verified under different scenarios of the StarCraft Multi-Agent Challenge micromanagement environment. Further experimental results show that our FDMAC model can outperform the best baseline by 10.47% on average in terms of the median test win rate.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"184 ","pages":"Article 107035"},"PeriodicalIF":6.0000,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S089360802400964X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Distributional Reinforcement Learning (RL) goes beyond estimating the expected value of future returns by modeling the entire return distribution, offering greater expressiveness and deeper insight into the value function. To leverage this advantage, distributional multi-agent systems based on value-decomposition techniques have recently been proposed. Ideally, a distributional multi-agent system should be fully distributional, meaning that both the individual and global value functions are constructed in distributional form. However, recent studies show that directly applying traditional value-decomposition techniques to this fully distributional setting cannot guarantee the necessary Individual-Global-Max (IGM) principle. To address this problem, we propose a novel fully value distributional multi-agent framework based on value decomposition and prove that the IGM principle is guaranteed under our framework. Based on this framework, we propose a practical deep reinforcement learning model called Fully Distributional Multi-Agent Cooperation (FDMAC), whose effectiveness is verified across different scenarios of the StarCraft Multi-Agent Challenge micromanagement environment. Further experimental results show that FDMAC outperforms the best baseline by 10.47% on average in terms of median test win rate.
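For context, the Individual-Global-Max (IGM) principle referenced in the abstract requires that greedy joint-action selection on the global value function be consistent with each agent's greedy action selection on its individual value function. Below is a minimal sketch of this condition in its usual expectation form for distributional value decomposition; the notation (joint history, joint action, global return distribution Z_tot, individual return distributions Z_i) is a standard assumption for illustration and not necessarily the paper's exact formulation.

```latex
% Expectation-form IGM condition for a fully distributional value decomposition
% (standard statement from the value-decomposition literature; the paper proves
% an IGM guarantee under its own framework).
\arg\max_{\mathbf{u}} \, \mathbb{E}\!\left[ Z_{tot}(\boldsymbol{\tau}, \mathbf{u}) \right]
=
\begin{pmatrix}
\arg\max_{u_1} \, \mathbb{E}\!\left[ Z_1(\tau_1, u_1) \right] \\
\vdots \\
\arg\max_{u_n} \, \mathbb{E}\!\left[ Z_n(\tau_n, u_n) \right]
\end{pmatrix}
```

As a complementary illustration of what "modeling the entire return distribution" means in practice, the following hypothetical sketch shows a per-agent quantile value head in the style of QR-DQN, where greedy actions are taken on the quantile mean, consistent with the expectation-form condition above. The class name, layer sizes, and quantile count are illustrative assumptions, not FDMAC's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical illustration (not FDMAC's architecture): a per-agent quantile
# value head in the style of QR-DQN, where the return distribution Z_i(tau_i, u_i)
# is represented by a fixed number of equally weighted quantile estimates.
class QuantileAgentHead(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, n_quantiles: int = 32):
        super().__init__()
        self.n_actions = n_actions
        self.n_quantiles = n_quantiles
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions * n_quantiles),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Quantile estimates of shape (batch, n_actions, n_quantiles).
        return self.net(obs).view(-1, self.n_actions, self.n_quantiles)

    def greedy_action(self, obs: torch.Tensor) -> torch.Tensor:
        # Expectation-form action selection: the expected return of each action
        # is the mean over its quantiles, matching the IGM condition above.
        quantiles = self.forward(obs)          # (batch, n_actions, n_quantiles)
        expected_q = quantiles.mean(dim=-1)    # (batch, n_actions)
        return expected_q.argmax(dim=-1)       # (batch,)


if __name__ == "__main__":
    head = QuantileAgentHead(obs_dim=10, n_actions=5)
    obs = torch.randn(4, 10)
    print(head.greedy_action(obs))  # one greedy action per observation in the batch
```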
Source journal: Neural Networks
Category: Engineering Technology, Computer Science: Artificial Intelligence
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.