Distributed scalable multi-agent reinforcement learning with intrinsic-episodic dual exploration

IF 6.2 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, THEORY & METHODS
Shuhan Qi, Shuhao Zhang, Qiang Wang, Jiajia Zhang, Xuan Wang
{"title":"Distributed scalable multi-agent reinforcement learning with intrinsic-episodic dual exploration","authors":"Shuhan Qi,&nbsp;Shuhao Zhang,&nbsp;Qiang Wang,&nbsp;Jiajia Zhang,&nbsp;Xuan Wang","doi":"10.1016/j.future.2025.108040","DOIUrl":null,"url":null,"abstract":"<div><div>Cooperative multi-agent reinforcement learning still faces challenges in multi-agent exploration and data-efficiency. In this paper, we propose a practical framework named Distributed Scalable Multi-Agent Reinforcement Learning with Intrinsic-Episodic Dual Exploration (SIEMA) to tackle these challenges. Under the widely-applied assumption of centralized training with decentralized execution and value decomposition assumption, SIEMA encourages multi-agent exploration and addresses the issue of low sample utilization through Intrinsic-Episodic Dual Exploration. The Cooperative Exploration Intrinsic Reward (CEIR) component incentivizes exploration from various aspects, incorporating novelty, optimal distance, and cooperative exploration. Episodic Exploration Replay (EER) explores at the episode level, ensuring optimal utilization of all samples in the replay buffer. Furthermore, we introduce the distributed scalable multi-agent training framework to accelerate the learning process and address the issue of low sample generation in MARL by deploying multiple workers and actors in a distributed manner. We illustrate the advantages of SIEMA by ablation experiments, and demonstrate its remarkable superiority over state-of-the-art MARL algorithms on challenging tasks in the StarCraft II micromanagement benchmark.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"175 ","pages":"Article 108040"},"PeriodicalIF":6.2000,"publicationDate":"2025-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25003358","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Cooperative multi-agent reinforcement learning (MARL) still faces challenges in multi-agent exploration and data efficiency. In this paper, we propose a practical framework named Distributed Scalable Multi-Agent Reinforcement Learning with Intrinsic-Episodic Dual Exploration (SIEMA) to tackle these challenges. Under the widely applied assumptions of centralized training with decentralized execution and value decomposition, SIEMA encourages multi-agent exploration and addresses low sample utilization through Intrinsic-Episodic Dual Exploration. The Cooperative Exploration Intrinsic Reward (CEIR) component incentivizes exploration from several aspects, incorporating novelty, optimal distance, and cooperative exploration. Episodic Exploration Replay (EER) explores at the episode level, ensuring effective utilization of all samples in the replay buffer. Furthermore, we introduce a distributed, scalable multi-agent training framework that accelerates learning and addresses slow sample generation in MARL by deploying multiple workers and actors in a distributed manner. We illustrate the advantages of SIEMA through ablation experiments and demonstrate its superiority over state-of-the-art MARL algorithms on challenging tasks in the StarCraft II micromanagement benchmark.
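The abstract names two exploration mechanisms, an intrinsic reward (CEIR) and episode-level replay (EER), without giving their formulas here. The following is a minimal, hypothetical Python sketch of how such a dual-exploration setup could be wired together, assuming a count-based novelty bonus, a simple cooperation term, and return-weighted episode sampling; the class names, weights, and scoring rule are illustrative assumptions, not the paper's actual definitions.

```python
from collections import deque

import numpy as np


class CooperativeIntrinsicReward:
    """Toy stand-in for a CEIR-style bonus: combines a count-based novelty
    term with a cooperation term (both are assumptions for illustration)."""

    def __init__(self, novelty_weight: float = 0.5, coop_weight: float = 0.5):
        self.visit_counts = {}
        self.novelty_weight = novelty_weight
        self.coop_weight = coop_weight

    def __call__(self, joint_obs_key, num_exploring_agents: int, num_agents: int) -> float:
        # Count-based novelty: 1 / sqrt(N(s)) is a common proxy for state novelty.
        count = self.visit_counts.get(joint_obs_key, 0) + 1
        self.visit_counts[joint_obs_key] = count
        novelty = 1.0 / np.sqrt(count)
        # Cooperation term: fraction of agents exploring together (assumption).
        coop = num_exploring_agents / num_agents
        return self.novelty_weight * novelty + self.coop_weight * coop


class EpisodicReplay:
    """Toy stand-in for EER: stores whole episodes and samples them with
    probability proportional to an episode-level score (e.g. return)."""

    def __init__(self, capacity: int = 1000):
        self.episodes = deque(maxlen=capacity)
        self.scores = deque(maxlen=capacity)

    def add(self, episode, score: float) -> None:
        self.episodes.append(episode)
        self.scores.append(max(score, 1e-6))  # keep probabilities strictly positive

    def sample(self, batch_size: int):
        probs = np.array(self.scores, dtype=float)
        probs /= probs.sum()
        idx = np.random.choice(len(self.episodes), size=batch_size, p=probs)
        return [self.episodes[i] for i in idx]


if __name__ == "__main__":
    ceir = CooperativeIntrinsicReward()
    bonus = ceir(joint_obs_key=(1, 2, 3), num_exploring_agents=2, num_agents=3)
    buffer = EpisodicReplay()
    buffer.add(episode=[("obs", "act", 1.0)], score=bonus)
    print(buffer.sample(batch_size=1))
```

In a full MARL pipeline, the intrinsic bonus would be added to the environment reward during training, and the episode-level sampling would replace uniform replay; the paper's distributed workers and actors would generate the episodes that feed this buffer.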
Source journal
CiteScore: 19.90
Self-citation rate: 2.70%
Articles per year: 376
Review time: 10.6 months
Journal description: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.