Adaptive Social Learning for Slow Markov Chains
Malek Khammassi; Virginia Bordignon; Vincenzo Matta; Ali H. Sayed
IEEE Transactions on Signal Processing, vol. 73, pp. 3671-3687, published 2025-09-08
DOI: 10.1109/TSP.2025.3606580
https://ieeexplore.ieee.org/document/11153057/
This paper studies the problem of interconnected agents collaborating to track a dynamic state from partially informative observations, where the state follows a slow finite-state Markov chain. While the centralized version of this problem is well understood, the decentralized setting warrants further exploration. This work aims to demonstrate that a decentralized social learning strategy can achieve the same error probability scaling law in the rare transitions regime as the optimal centralized solution. To study this problem, we focus on adaptive social learning (ASL), a recent strategy developed for non-stationary environments, and analyze its performance when the agents’ observations are governed by a hidden, slow Markov chain. Our study yields two key findings. First, we demonstrate that the ASL adaptation performance is closely linked to the dynamics of the underlying Markov chain, achieving a vanishing steady-state error probability when the average drift time of the Markov chain exceeds the ASL adaptation time. Second, we derive a closed-form upper bound for the ASL steady-state error probability in the rare transition regime, showing that it decays similarly to the optimal centralized solution. Simulations illustrate our theoretical findings and provide a comparative analysis with existing strategies.
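The abstract centers on the adaptive social learning (ASL) strategy tracking a hidden state that follows a slow (rare-transition) Markov chain. As a rough illustration of the setup, the sketch below simulates the standard ASL recursion in log-belief-ratio form, λ_i = Aᵀ[(1 − δ)λ_{i−1} + δ x_i], for a two-state chain observed through Gaussian likelihoods. The network topology, likelihood model, and all parameter values are illustrative assumptions, not the configuration studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Slow two-state Markov chain: transitions are rare (rare-transition regime).
eps = 1e-3
T = np.array([[1 - eps, eps],
              [eps, 1 - eps]])

# Hypothetical network: K agents, uniform doubly stochastic combination matrix.
K = 5
A = np.full((K, K), 1.0 / K)

delta = 0.05                    # ASL step size (controls the adaptation time ~ 1/delta)
means = np.array([0.0, 1.0])    # Gaussian observation means under each hypothesis
sigma = 1.0

def loglik(x, theta):
    """Log-likelihood of observations x under hypothesis theta (up to a constant)."""
    return -0.5 * ((x - means[theta]) / sigma) ** 2

n_steps = 2000
state = 0
lam = np.zeros(K)               # per-agent log-belief ratio log(mu(0)/mu(1))
errors = 0
for i in range(n_steps):
    state = rng.choice(2, p=T[state])
    x = means[state] + sigma * rng.standard_normal(K)   # one observation per agent
    x_llr = loglik(x, 0) - loglik(x, 1)                 # local log-likelihood ratios
    # ASL step: discounted local adaptation, then social combination.
    lam = A.T @ ((1 - delta) * lam + delta * x_llr)
    decisions = np.where(lam > 0, 0, 1)                 # each agent picks the likelier state
    errors += int(np.sum(decisions != state))

error_rate = errors / (n_steps * K)
print(f"empirical error rate: {error_rate:.4f}")
```

With rare transitions, errors concentrate in short bursts after each state change, lasting roughly the adaptation time of the recursion; between transitions the agents track the state reliably, consistent with the vanishing steady-state error described in the abstract.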
Journal introduction:
The IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals. The term “signal” includes, among others, audio, video, speech, image, communication, geophysical, sonar, radar, medical and musical signals. Examples of topics of interest include, but are not limited to, information processing and the theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals.