{"title":"On the rate of learning in distributed hypothesis testing","authors":"Anusha Lalitha, T. Javidi","doi":"10.1109/ALLERTON.2015.7446979","DOIUrl":null,"url":null,"abstract":"This paper considers a problem of distributed hypothesis testing and cooperative learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypotheses). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not known. We consider a social (“non-Bayesian”) learning rule from previous literature, in which nodes first perform a Bayesian update of their belief (distribution estimate) of the parameter based on their local observation, communicate these updates to their neighbors, and then perform a “non-Bayesian” linear consensus using the log-beliefs of their neighbors. For this learning rule, we know that under mild assumptions, the belief of any node in any incorrect parameter converges to zero exponentially fast, and the exponential rate of learning is a characterized by the network structure and the divergences between the observations' distributions. Tight bounds on the probability of deviating from this nominal rate in aperiodic networks is derived. The bounds are shown to hold for all conditional distributions which satisfy a mild bounded moment condition.","PeriodicalId":112948,"journal":{"name":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2015.7446979","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper considers a problem of distributed hypothesis testing and cooperative learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not known. We consider a social ("non-Bayesian") learning rule from the previous literature, in which nodes first perform a Bayesian update of their belief (distribution estimate) of the parameter based on their local observation, communicate these updates to their neighbors, and then perform a "non-Bayesian" linear consensus using the log-beliefs of their neighbors. For this learning rule, it is known that under mild assumptions the belief of any node in any incorrect parameter converges to zero exponentially fast, and the exponential rate of learning is characterized by the network structure and the divergences between the observations' distributions. Tight bounds on the probability of deviating from this nominal rate in aperiodic networks are derived. The bounds are shown to hold for all conditional distributions that satisfy a mild bounded moment condition.
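The following is a minimal sketch (not the authors' code) of one round of the learn-then-consensus rule the abstract describes, assuming a finite hypothesis set, a row-stochastic consensus weight matrix W, and known local likelihoods f_i(x | theta); all variable names are illustrative.

```python
import numpy as np

def social_learning_step(beliefs, likelihoods, W):
    """One round of the social ("non-Bayesian") learning rule.

    beliefs     : (n_nodes, n_hypotheses) current beliefs, rows sum to 1
    likelihoods : (n_nodes, n_hypotheses) f_i(X_i | theta) evaluated at each
                  node's fresh private observation
    W           : (n_nodes, n_nodes) row-stochastic consensus weights
    """
    # Step 1: local Bayesian update with the new private observation.
    bayes = beliefs * likelihoods
    bayes /= bayes.sum(axis=1, keepdims=True)

    # Step 2: "non-Bayesian" linear consensus on the log-beliefs,
    # i.e., a weighted geometric average of neighbors' updated beliefs.
    log_consensus = W @ np.log(bayes)
    new_beliefs = np.exp(log_consensus)
    new_beliefs /= new_beliefs.sum(axis=1, keepdims=True)
    return new_beliefs
```

Under the assumptions stated in the abstract, iterating this step drives the belief each node places on any incorrect hypothesis to zero exponentially fast, at a rate determined jointly by W and the divergences between the hypotheses' observation distributions.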