{"title":"具有一跳邻居的分布式多智能体强化学习与离散者缓解计算","authors":"Baoqian Wang;Junfei Xie;Nikolay Atanasov","doi":"10.1109/TCNS.2024.3511400","DOIUrl":null,"url":null,"abstract":"Most multiagent reinforcement learning (MARL) methods are limited in the scale of problems they can handle. With increasing numbers of agents, the number of training iterations required to find the optimal behaviors increases exponentially due to the exponentially growing joint state and action spaces. This article tackles this limitation by introducing a scalable MARL method called distributed multiagent reinforcement learning with one-hop neighbors (DARL1N). DARL1N is an off-policy actor–critic method that addresses the curse of dimensionality by restricting information exchanges among the agents to one-hop neighbors when representing value and policy functions. Each agent optimizes its value and policy functions over a one-hop neighborhood, significantly reducing the learning complexity, yet maintaining expressiveness by training with varying neighbor numbers and states. This structure allows us to formulate a distributed learning framework to further speed up the training procedure. Distributed computing systems, however, contain <italic>straggler</i> compute nodes, which are slow or unresponsive due to communication bottlenecks, software problems, or hardware problems. To mitigate the detrimental straggler effect, we introduce a novel coded distributed learning architecture, which leverages coding theory to improve the resilience of the learning system to stragglers. Comprehensive experiments show that DARL1N significantly reduces training time without sacrificing policy quality and is scalable as the number of agents increases. Moreover, the coded distributed learning architecture improves training efficiency in the presence of stragglers.","PeriodicalId":56023,"journal":{"name":"IEEE Transactions on Control of Network Systems","volume":"12 2","pages":"1300-1312"},"PeriodicalIF":4.0000,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Distributed Multiagent Reinforcement Learning With One-Hop Neighbors and Compute Straggler Mitigation\",\"authors\":\"Baoqian Wang;Junfei Xie;Nikolay Atanasov\",\"doi\":\"10.1109/TCNS.2024.3511400\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most multiagent reinforcement learning (MARL) methods are limited in the scale of problems they can handle. With increasing numbers of agents, the number of training iterations required to find the optimal behaviors increases exponentially due to the exponentially growing joint state and action spaces. This article tackles this limitation by introducing a scalable MARL method called distributed multiagent reinforcement learning with one-hop neighbors (DARL1N). DARL1N is an off-policy actor–critic method that addresses the curse of dimensionality by restricting information exchanges among the agents to one-hop neighbors when representing value and policy functions. Each agent optimizes its value and policy functions over a one-hop neighborhood, significantly reducing the learning complexity, yet maintaining expressiveness by training with varying neighbor numbers and states. This structure allows us to formulate a distributed learning framework to further speed up the training procedure. 
Distributed computing systems, however, contain <italic>straggler</i> compute nodes, which are slow or unresponsive due to communication bottlenecks, software problems, or hardware problems. To mitigate the detrimental straggler effect, we introduce a novel coded distributed learning architecture, which leverages coding theory to improve the resilience of the learning system to stragglers. Comprehensive experiments show that DARL1N significantly reduces training time without sacrificing policy quality and is scalable as the number of agents increases. Moreover, the coded distributed learning architecture improves training efficiency in the presence of stragglers.\",\"PeriodicalId\":56023,\"journal\":{\"name\":\"IEEE Transactions on Control of Network Systems\",\"volume\":\"12 2\",\"pages\":\"1300-1312\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2024-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Control of Network Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10777551/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Control of Network Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10777551/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Most multiagent reinforcement learning (MARL) methods are limited in the scale of problems they can handle. With increasing numbers of agents, the number of training iterations required to find the optimal behaviors increases exponentially due to the exponentially growing joint state and action spaces. This article tackles this limitation by introducing a scalable MARL method called distributed multiagent reinforcement learning with one-hop neighbors (DARL1N). DARL1N is an off-policy actor–critic method that addresses the curse of dimensionality by restricting information exchanges among the agents to one-hop neighbors when representing value and policy functions. Each agent optimizes its value and policy functions over a one-hop neighborhood, significantly reducing the learning complexity, yet maintaining expressiveness by training with varying neighbor numbers and states. This structure allows us to formulate a distributed learning framework to further speed up the training procedure. Distributed computing systems, however, contain straggler compute nodes, which are slow or unresponsive due to communication bottlenecks, software problems, or hardware problems. To mitigate the detrimental straggler effect, we introduce a novel coded distributed learning architecture, which leverages coding theory to improve the resilience of the learning system to stragglers. Comprehensive experiments show that DARL1N significantly reduces training time without sacrificing policy quality and is scalable as the number of agents increases. Moreover, the coded distributed learning architecture improves training efficiency in the presence of stragglers.
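To make the one-hop restriction concrete, below is a minimal Python/NumPy sketch. It is not the paper's implementation: the neighborhood radius, the zero-padding scheme, and the helper names (one_hop_neighbors, local_critic_input) are illustrative assumptions. It shows the core idea that each agent's value-function input is assembled only from its own state and action and those of its one-hop neighbors, padded to a fixed width, rather than from the joint state and action of all agents.

```python
import numpy as np

def one_hop_neighbors(positions, agent_id, radius):
    """Indices of agents within `radius` of agent `agent_id` (its one-hop neighborhood)."""
    dists = np.linalg.norm(positions - positions[agent_id], axis=1)
    neighbors = np.where(dists <= radius)[0]
    return neighbors[neighbors != agent_id]

def local_critic_input(states, actions, agent_id, neighbors, max_neighbors):
    """Critic input built from the agent's own state/action and its one-hop
    neighbors' states/actions only, zero-padded to a fixed width so the value
    network sees a constant-size vector regardless of the neighbor count."""
    own = np.concatenate([states[agent_id], actions[agent_id]])
    neigh = [np.concatenate([states[j], actions[j]]) for j in neighbors[:max_neighbors]]
    pad = [np.zeros_like(own)] * (max_neighbors - len(neigh))
    return np.concatenate([own] + neigh + pad)

# Toy example: 6 agents with 2-D states (used here as positions) and scalar actions.
rng = np.random.default_rng(0)
n_agents, state_dim, act_dim = 6, 2, 1
states = rng.normal(size=(n_agents, state_dim))
actions = rng.normal(size=(n_agents, act_dim))

i = 0
nbrs = one_hop_neighbors(states, i, radius=1.5)
x = local_critic_input(states, actions, i, nbrs, max_neighbors=3)
print(nbrs, x.shape)  # input size depends on max_neighbors, not on the total number of agents
```

The straggler-mitigation idea can be sketched in a similarly simplified way. The architecture described in the abstract leverages coding theory; the toy below uses the simplest such scheme, a wrap-around repetition assignment, and the function names (coded_assignment, aggregate) are hypothetical. Each agent's local computation is replicated on r learners, so the central node can proceed as soon as the learners that have responded jointly cover every agent, without waiting for stragglers.

```python
import random

def coded_assignment(n_agents, n_learners, r):
    """Assign each agent's computation to r learners (wrap-around repetition code)."""
    assignment = {w: [] for w in range(n_learners)}
    for a in range(n_agents):
        for k in range(r):
            assignment[(a + k) % n_learners].append(a)
    return assignment

def aggregate(assignment, n_agents, responded):
    """Combine results from learners that responded; return None if some agent
    is covered only by stragglers (i.e., aggregation must keep waiting)."""
    covered = {}
    for w in responded:
        for a in assignment[w]:
            covered.setdefault(a, w)   # take the first available copy of agent a's result
    if len(covered) < n_agents:
        return None                    # still waiting on stragglers
    return covered                     # agent -> learner whose result is used

random.seed(0)
assignment = coded_assignment(n_agents=6, n_learners=4, r=2)
responded = random.sample(range(4), 3)      # one learner straggles and never responds
print(aggregate(assignment, 6, responded))  # training proceeds without the straggler
```

With four learners and r = 2, this toy assignment tolerates any single straggler; stronger codes trade extra redundancy for tolerance of more stragglers in the same way.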
About the journal:
The IEEE Transactions on Control of Network Systems is committed to the timely publication of high-impact papers at the intersection of control systems and network science. In particular, the journal addresses research on the analysis, design, and implementation of networked control systems, as well as control over networks. Relevant work includes the full spectrum from basic research on control systems to the design of engineering solutions for automatic control of, and over, networks. The topics covered by this journal include:
Coordinated control and estimation over networks
Control and computation over sensor networks
Control under communication constraints
Control and performance analysis issues that arise in the dynamics of networks used in application areas such as communications, computers, transportation, manufacturing, Web ranking and aggregation, social networks, biology, power systems, and economics
Synchronization of activities across a controlled network
Stability analysis of controlled networks
Analysis of networks as hybrid dynamical systems