Authors: Wenting Liu; Peng Yi
Journal: IEEE Transactions on Control of Network Systems (Q2, Automation & Control Systems; impact factor 4.0), vol. 11, no. 4, pp. 1734-1745
DOI: 10.1109/TCNS.2024.3395849
Publication date: 2024-03-02
URL: https://ieeexplore.ieee.org/document/10517445/
Distributed No-Regret Learning in Aggregative Games With Residual Bandit Feedback
This article investigates distributed no-regret learning in repeated aggregative games with bandit feedback. The players lack an explicit model of the game and can learn their actions only from the payoff values they observe. Moreover, they cannot directly access the aggregate term that carries global information; instead, each player shares information with its neighbors without revealing its own strategy. We present a novel no-regret learning algorithm, distributed online gradient descent with residual bandit, in which each player maintains a local estimate of the aggregate and adaptively adjusts its next action through the residual bandit mechanism and the online gradient descent method. We first provide a regret analysis for aggregative games in which each player's individual problem is convex, revealing key relationships among the regret bound, network connectivity, and game structure. Then, we prove that when the game is also strictly monotone, the action sequence generated by the algorithm converges almost surely to the Nash equilibrium. Finally, we demonstrate the algorithm's performance through numerical simulations on the Cournot game.
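To make the abstract's algorithmic ingredients concrete, the sketch below runs a residual one-point bandit gradient estimator with projected online gradient ascent on an illustrative Cournot game. It is a minimal sketch under stated assumptions, not the paper's algorithm: the consensus-based estimation of the aggregate is omitted (the simulator evaluates payoffs directly), and all game parameters, step sizes, and exploration radii are hypothetical choices. The "residual" idea shown is that each player subtracts its previous round's payoff as a baseline inside the one-point gradient estimate, which lowers the estimator's variance relative to the plain one-point scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cournot game with N firms: price p(S) = a - b*S, where S is total output.
# Firm i's payoff: u_i(x) = x_i * p(S) - c_i * x_i.  (Illustrative parameters.)
N, T = 5, 3000
a, b = 10.0, 1.0
c = rng.uniform(1.0, 3.0, N)       # hypothetical marginal costs
lo, hi = 0.0, 5.0                  # per-firm output bounds

def payoffs(x):
    S = x.sum()
    return x * (a - b * S) - c * x

# Closed-form Cournot-Nash equilibrium for comparison,
# from the first-order conditions a - b*S - b*x_i - c_i = 0.
S_star = (N * a - c.sum()) / (b * (N + 1))
x_star = (a - c - b * S_star) / b

x = rng.uniform(lo, hi, N)         # initial outputs
x0 = x.copy()
u_prev = payoffs(x)                # payoffs observed in the previous round

for t in range(1, T + 1):
    eta = 0.5 / np.sqrt(t)         # illustrative decaying step size
    delta = 1.0 / t ** 0.25        # shrinking exploration radius
    z = rng.choice([-1.0, 1.0], N)           # per-player perturbation signs
    x_play = np.clip(x + delta * z, lo, hi)  # perturbed actions actually played
    u = payoffs(x_play)                      # bandit feedback: payoff values only
    # Residual one-point gradient estimate: the previous payoff serves as a
    # baseline, reducing the variance of the usual one-point estimator.
    g = (u - u_prev) / delta * z
    u_prev = u
    x = np.clip(x + eta * g, lo, hi)         # projected gradient ascent on payoff

print(np.linalg.norm(x - x_star), np.linalg.norm(x0 - x_star))
```

Since this Cournot game is strictly monotone, the iterates should drift toward the unique Nash equilibrium, mirroring (in a loose, single-machine way) the almost-sure convergence result stated in the abstract.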
Journal Introduction:
The IEEE Transactions on Control of Network Systems is committed to the timely publication of high-impact papers at the intersection of control systems and network science. In particular, the journal addresses research on the analysis, design, and implementation of networked control systems, as well as control over networks. Relevant work spans the full spectrum from basic research on control systems to the design of engineering solutions for automatic control of, and over, networks. The topics covered by this journal include:
- Coordinated control and estimation over networks
- Control and computation over sensor networks
- Control under communication constraints
- Control and performance analysis issues arising in the dynamics of networks used in application areas such as communications, computers, transportation, manufacturing, Web ranking and aggregation, social networks, biology, power systems, and economics
- Synchronization of activities across a controlled network
- Stability analysis of controlled networks
- Analysis of networks as hybrid dynamical systems