{"title":"Naive Learning Through Probability Matching","authors":"Itai Arieli, Y. Babichenko, Manuel Mueller-Frank","doi":"10.2139/ssrn.3338015","DOIUrl":null,"url":null,"abstract":"We analyze boundedly rational updating in a repeated interaction network model with binary states and actions. We decompose the updating procedure into a deterministic stationary Markov belief updating component inspired by DeGroot updating and pair it with a random probability matching strategy that assigns probabilities to the actions given the underlying boundedly rational belief. This approach allows overcoming the impediments to consensus and naive learning inherent in deterministic updating functions in coarse action environments. We show that if a sequence of growing networks satisfies vanishing influence, then the eventual consensus action equals the realized state with a probability converging to one.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 ACM Conference on Economics and Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3338015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8
Abstract
We analyze boundedly rational updating in a repeated interaction network model with binary states and actions. We decompose the updating procedure into a deterministic stationary Markov belief updating component inspired by DeGroot updating and pair it with a random probability matching strategy that assigns probabilities to the actions given the underlying boundedly rational belief. This approach overcomes the impediments to consensus and naive learning that are inherent in deterministic updating functions in coarse action environments. We show that if a sequence of growing networks satisfies vanishing influence, then the eventual consensus action equals the realized state with probability converging to one.
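The mechanism described above can be sketched in code. The following is a minimal illustrative simulation, not the authors' model: it assumes a row-stochastic influence matrix `W` (hypothetical equal weights here), lets each agent play the binary action 1 with probability equal to its current belief (probability matching), and then forms the next belief as the `W`-weighted average of observed neighbor actions (a DeGroot-inspired update).

```python
import numpy as np

def probability_matching_round(W, beliefs, rng):
    """One round of the sketched dynamic.

    Each agent plays action 1 with probability equal to its current belief
    (probability matching), then updates its belief to the W-weighted
    average of the observed binary actions (DeGroot-inspired averaging).
    W is assumed row-stochastic, which keeps beliefs in [0, 1].
    """
    actions = (rng.random(len(beliefs)) < beliefs).astype(float)
    new_beliefs = W @ actions
    return new_beliefs, actions

# Illustrative run on a hypothetical 4-agent network with equal influence.
rng = np.random.default_rng(0)
n = 4
W = np.full((n, n), 1.0 / n)               # equal-weight, row-stochastic
beliefs = np.array([0.9, 0.8, 0.7, 0.6])   # initial beliefs that state = 1
for _ in range(50):
    beliefs, actions = probability_matching_round(W, beliefs, rng)
```

Because the actions are random even when beliefs agree, the dynamic can escape the disagreement traps that a deterministic coarse-action rule gets stuck in; with equal weights all agents share a belief after one round, and the process is eventually absorbed at a consensus action.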