{"title":"多代理多臂匪徒中决策的分布式共识算法","authors":"Xiaotong Cheng;Setareh Maghsudi","doi":"10.1109/TCNS.2024.3395850","DOIUrl":null,"url":null,"abstract":"In this article, we study a structured multiagent multiarmed bandit (MAMAB) problem in a dynamic environment. A graph reflects the information-sharing structure among agents, and the arms' reward distributions are piecewise-stationary with several unknown change points. The agents face the identical piecewise-stationary MAB problem. The goal is to develop a decision-making policy for the agents that minimizes the regret, which is the expected total loss of not playing the optimal arm at each time step. Our proposed solution, restarted Bayesian online change point detection in cooperative upper confidence bound (RBO-Coop-UCB) algorithm, involves an efficient multiagent UCB algorithm as its core enhanced with a Bayesian change point detector. We also develop a simple restart decision cooperation that improves decision-making. Theoretically, we establish that the expected group regret of RBO-Coop-UCB is upper bounded by <inline-formula><tex-math>$\\mathcal {O}(KNM\\log T + K\\sqrt{MT\\log T})$</tex-math></inline-formula>, where <inline-formula><tex-math>$K$</tex-math></inline-formula> is the number of agents, <inline-formula><tex-math>$M$</tex-math></inline-formula> is the number of arms, and <inline-formula><tex-math>$T$</tex-math></inline-formula> is the number of time steps. Numerical experiments on synthetic and real-world datasets demonstrate that our proposed method outperforms the state-of-the-art algorithms.","PeriodicalId":56023,"journal":{"name":"IEEE Transactions on Control of Network Systems","volume":"11 4","pages":"2187-2199"},"PeriodicalIF":4.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Distributed Consensus Algorithm for Decision-Making in Multiagent Multiarmed Bandit\",\"authors\":\"Xiaotong Cheng;Setareh Maghsudi\",\"doi\":\"10.1109/TCNS.2024.3395850\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this article, we study a structured multiagent multiarmed bandit (MAMAB) problem in a dynamic environment. A graph reflects the information-sharing structure among agents, and the arms' reward distributions are piecewise-stationary with several unknown change points. The agents face the identical piecewise-stationary MAB problem. The goal is to develop a decision-making policy for the agents that minimizes the regret, which is the expected total loss of not playing the optimal arm at each time step. Our proposed solution, restarted Bayesian online change point detection in cooperative upper confidence bound (RBO-Coop-UCB) algorithm, involves an efficient multiagent UCB algorithm as its core enhanced with a Bayesian change point detector. We also develop a simple restart decision cooperation that improves decision-making. Theoretically, we establish that the expected group regret of RBO-Coop-UCB is upper bounded by <inline-formula><tex-math>$\\\\mathcal {O}(KNM\\\\log T + K\\\\sqrt{MT\\\\log T})$</tex-math></inline-formula>, where <inline-formula><tex-math>$K$</tex-math></inline-formula> is the number of agents, <inline-formula><tex-math>$M$</tex-math></inline-formula> is the number of arms, and <inline-formula><tex-math>$T$</tex-math></inline-formula> is the number of time steps. 
Numerical experiments on synthetic and real-world datasets demonstrate that our proposed method outperforms the state-of-the-art algorithms.\",\"PeriodicalId\":56023,\"journal\":{\"name\":\"IEEE Transactions on Control of Network Systems\",\"volume\":\"11 4\",\"pages\":\"2187-2199\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Control of Network Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10517406/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Control of Network Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10517406/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Distributed Consensus Algorithm for Decision-Making in Multiagent Multiarmed Bandit
In this article, we study a structured multiagent multiarmed bandit (MAMAB) problem in a dynamic environment. A graph reflects the information-sharing structure among agents, and the arms' reward distributions are piecewise stationary with several unknown change points; all agents face the same piecewise-stationary MAB problem. The goal is to develop a decision-making policy for the agents that minimizes the regret, i.e., the expected total loss incurred by not playing the optimal arm at each time step. Our proposed solution, the restarted Bayesian online change point detection in cooperative upper confidence bound (RBO-Coop-UCB) algorithm, builds on an efficient multiagent UCB algorithm as its core, enhanced with a Bayesian change point detector. We also develop a simple restart-decision cooperation mechanism that improves decision-making. Theoretically, we establish that the expected group regret of RBO-Coop-UCB is upper bounded by $\mathcal{O}(KNM\log T + K\sqrt{MT\log T})$, where $K$ is the number of agents, $M$ is the number of arms, and $T$ is the number of time steps. Numerical experiments on synthetic and real-world datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.
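To make the general recipe in the abstract concrete, below is a minimal, single-agent Python sketch of a UCB index policy whose statistics are restarted whenever a Bayesian online change point detector flags a likely change in the reward distribution. This is an illustration under simplifying assumptions, not the authors' RBO-Coop-UCB algorithm: the cooperative graph/consensus component is omitted, rewards are assumed Bernoulli with a Beta-Bernoulli run-length detector (Adams-MacKay style), and the names `BetaBernoulliBOCPD`, `ucb_with_restarts`, and all hyperparameters (hazard rate, detection threshold) are hypothetical choices made for the sketch.

```python
# Sketch only: UCB1 with restarts triggered by Bayesian online change point
# detection (BOCPD). Assumes Bernoulli rewards; not the paper's RBO-Coop-UCB.
import numpy as np


class BetaBernoulliBOCPD:
    """Run-length posterior for a Bernoulli stream with a Beta conjugate prior."""

    def __init__(self, hazard=1.0 / 200, a0=1.0, b0=1.0):
        self.hazard = hazard          # prior probability of a change per step
        self.a0, self.b0 = a0, b0     # Beta prior parameters
        self.reset()

    def reset(self):
        # One entry per candidate run length; start with run length 0 only.
        self.r_probs = np.array([1.0])
        self.a = np.array([self.a0])
        self.b = np.array([self.b0])

    def update(self, x):
        """Feed one observation x in {0, 1}; return the MAP run length."""
        # Predictive probability of x under each run-length's Beta posterior.
        pred = np.where(x == 1, self.a / (self.a + self.b),
                        self.b / (self.a + self.b))
        growth = self.r_probs * pred * (1.0 - self.hazard)  # run length grows
        cp = np.sum(self.r_probs * pred * self.hazard)      # change point now
        new_probs = np.concatenate(([cp], growth))
        new_probs /= new_probs.sum()
        # Update sufficient statistics for every candidate run length.
        self.a = np.concatenate(([self.a0], self.a + x))
        self.b = np.concatenate(([self.b0], self.b + 1 - x))
        self.r_probs = new_probs
        return int(np.argmax(self.r_probs))


def ucb_with_restarts(reward_fn, n_arms, horizon, detect_len=5):
    """UCB1 whose statistics restart when the played arm's detector
    believes the current run length is short (i.e., a recent change)."""
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    detectors = [BetaBernoulliBOCPD() for _ in range(n_arms)]
    total = 0.0
    for t in range(1, horizon + 1):
        if np.any(counts == 0):
            arm = int(np.argmin(counts))  # play each arm once after a restart
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        x = reward_fn(arm, t)
        counts[arm] += 1
        sums[arm] += x
        total += x
        # Restart all statistics if a change is detected on the played arm.
        if detectors[arm].update(x) < detect_len and counts[arm] > detect_len:
            counts[:] = 0
            sums[:] = 0
            for d in detectors:
                d.reset()
    return total


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Piecewise-stationary Bernoulli bandit: the best arm switches at t = 1500.
    def reward_fn(arm, t):
        means = [0.8, 0.3] if t < 1500 else [0.2, 0.9]
        return float(rng.random() < means[arm])

    print("total reward:", ucb_with_restarts(reward_fn, n_arms=2, horizon=3000))
```

In the cooperative setting of the paper, each agent would additionally exchange information with its graph neighbors and coordinate restart decisions; the sketch above only captures the restart-on-detection idea for a single agent.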
About the Journal:
The IEEE Transactions on Control of Network Systems is committed to the timely publication of high-impact papers at the intersection of control systems and network science. In particular, the journal addresses research on the analysis, design, and implementation of networked control systems, as well as control over networks. Relevant work includes the full spectrum from basic research on control systems to the design of engineering solutions for automatic control of, and over, networks. The topics covered by this journal include:
- Coordinated control and estimation over networks
- Control and computation over sensor networks
- Control under communication constraints
- Control and performance analysis issues that arise in the dynamics of networks used in application areas such as communications, computers, transportation, manufacturing, Web ranking and aggregation, social networks, biology, power systems, and economics
- Synchronization of activities across a controlled network
- Stability analysis of controlled networks
- Analysis of networks as hybrid dynamical systems