{"title":"On the Price of Decentralization in Decentralized Detection","authors":"Bruce Huang;I-Hsiang Wang","doi":"10.1109/TIT.2025.3538468","DOIUrl":null,"url":null,"abstract":"Fundamental limits on the error probabilities of a family of decentralized detection algorithms (e.g., the social learning rule proposed by Lalitha et al., 2018) over directed graphs are investigated. In decentralized detection, a network of nodes locally exchanging information about the samples they observe with their neighbors to collectively infer the underlying unknown hypothesis. Each node in the network weighs the messages received from its neighbors to form its private belief and only requires knowledge of the data generating distribution of its observation. In this work, it is first shown that while the original social learning rule of Lalitha et al., 2018 achieves asymptotically vanishing error probabilities as the number of samples tends to infinity, it suffers a gap in the achievable error exponent compared to the centralized case. The gap is due to the network imbalance caused by the local weights that each node chooses to weigh the messages received from its neighbors. To close this gap, a modified learning rule is proposed and shown to achieve error exponents as large as those in the centralized setup. This implies that there is essentially no first-order penalty caused by decentralization in the exponentially decaying rate of error probabilities. To elucidate the price of decentralization, further analysis on the higher-order asymptotics of the error probability is conducted. It turns out that the price is at most a constant multiplicative factor in the error probability, equivalent to an <inline-formula> <tex-math>$o(1/t)$ </tex-math></inline-formula> additive gap in the error exponent, where <italic>t</i> is the number of samples observed by each agent in the network and the number of rounds of information exchange. 
This constant depends on the network connectivity and captures the level of network imbalance. Results of simulation on the error probability supporting our learning rule are shown. Further discussions and extensions of results are also presented.","PeriodicalId":13494,"journal":{"name":"IEEE Transactions on Information Theory","volume":"71 4","pages":"2341-2359"},"PeriodicalIF":2.2000,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Theory","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10870343/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Fundamental limits on the error probabilities of a family of decentralized detection algorithms (e.g., the social learning rule proposed by Lalitha et al., 2018) over directed graphs are investigated. In decentralized detection, a network of nodes locally exchanges information about the samples the nodes observe with their neighbors in order to collectively infer the underlying unknown hypothesis. Each node weighs the messages received from its neighbors to form its private belief, and requires knowledge only of the data-generating distribution of its own observations. In this work, it is first shown that while the original social learning rule of Lalitha et al., 2018 achieves asymptotically vanishing error probabilities as the number of samples tends to infinity, it suffers a gap in the achievable error exponent compared to the centralized case. The gap stems from the network imbalance induced by the local weights each node uses to combine the messages received from its neighbors. To close this gap, a modified learning rule is proposed and shown to achieve error exponents as large as those in the centralized setup, implying that decentralization incurs essentially no first-order penalty in the exponentially decaying rate of error probabilities. To elucidate the price of decentralization, the higher-order asymptotics of the error probability are further analyzed. It turns out that the price is at most a constant multiplicative factor in the error probability, equivalent to an $o(1/t)$ additive gap in the error exponent, where $t$ denotes both the number of samples observed by each agent and the number of rounds of information exchange. This constant depends on the network connectivity and captures the level of network imbalance. Simulation results on the error probability supporting the proposed learning rule are presented, along with further discussions and extensions of the results.
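To make the setting concrete, the following is a minimal, illustrative sketch of a social-learning-style belief update in the spirit of the rule attributed to Lalitha et al., 2018: at each round, every node mixes its neighbors' log-beliefs through a weight matrix and adds the log-likelihood of its fresh local sample. The network, weight matrix, and Bernoulli observation models below are assumptions chosen for illustration, not the paper's actual construction.

```python
import numpy as np

# Illustrative sketch (all parameters are assumptions, not from the paper):
# 4 nodes on a directed ring, two hypotheses, Bernoulli observations.
rng = np.random.default_rng(0)

n = 4
W = np.array([              # row-stochastic weight matrix over a directed ring
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
])

# Per-node Bernoulli success probabilities under each hypothesis.
# Node 2 is locally uninformative (0.5 under both) and must rely on neighbors.
p = np.array([[0.3, 0.4, 0.5, 0.6],   # hypothesis 0 (the true one here)
              [0.7, 0.6, 0.5, 0.4]])  # hypothesis 1
true_hyp = 0

log_beliefs = np.zeros((n, 2))        # uniform prior, log domain
for t in range(200):
    x = rng.binomial(1, p[true_hyp])  # one sample per node per round
    loglik = x[:, None] * np.log(p.T) + (1 - x)[:, None] * np.log(1 - p.T)
    log_beliefs = W @ log_beliefs + loglik                 # mix neighbors, add evidence
    log_beliefs -= log_beliefs.max(axis=1, keepdims=True)  # renormalize for stability

decisions = log_beliefs.argmax(axis=1)
print(decisions)  # beliefs should concentrate on the true hypothesis at every node
```

Even the locally uninformative node reaches the correct decision, since the weighted mixing propagates the other nodes' evidence through the network; the choice of the weights in `W` is exactly what the abstract identifies as the source of network imbalance.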
About the journal:
The IEEE Transactions on Information Theory is a journal that publishes theoretical and experimental papers concerned with the transmission, processing, and utilization of information. The boundaries of acceptable subject matter are intentionally not sharply delimited. Rather, it is hoped that as the focus of research activity changes, a flexible policy will permit this Transactions to follow suit. Current appropriate topics are best reflected by recent Tables of Contents; they are summarized in the titles of editorial areas that appear on the inside front cover.