{"title":"鲁棒非贝叶斯社会学习","authors":"Itai Arieli, Y. Babichenko, Segev Shlomov","doi":"10.2139/ssrn.3381563","DOIUrl":null,"url":null,"abstract":"We study non-Bayesian social learning in large networks and binary state space. Agents who are located in a network receive conditionally i.i.d. signals over the state. We refer to the initial distribution of signals as the information structure. In each step, all agents aggregate their belief with the beliefs of their neighbors according to some non-Bayesian rule. We refer to the aggregation rule as the learning dynamic. We say that a dynamic leads to learning if the beliefs of all agents converge to the correct state with a probability that approaches one in an increasing sequence of large networks. We say that a class of information structures p is learnable if there exists a learning dynamic that leads to learning for all information structures in p. Namely, there exists a single learning dynamic that robustly leads to learning for all possible information structures. We provide a necessary and sufficient characterization of learnable classes of information structures. Whenever learning is possible in a class p it is also possible via a virtually additive learning dynamic, where players map beliefs to virtual values and in each period they simply sum up all neighbors' virtual values to deduce their new belief. In addition, we relax the common prior assumption and provide a sufficient condition for learning in the absence of a common prior.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Robust Non-Bayesian Social Learning\",\"authors\":\"Itai Arieli, Y. Babichenko, Segev Shlomov\",\"doi\":\"10.2139/ssrn.3381563\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We study non-Bayesian social learning in large networks and binary state space. Agents who are located in a network receive conditionally i.i.d. signals over the state. We refer to the initial distribution of signals as the information structure. In each step, all agents aggregate their belief with the beliefs of their neighbors according to some non-Bayesian rule. We refer to the aggregation rule as the learning dynamic. We say that a dynamic leads to learning if the beliefs of all agents converge to the correct state with a probability that approaches one in an increasing sequence of large networks. We say that a class of information structures p is learnable if there exists a learning dynamic that leads to learning for all information structures in p. Namely, there exists a single learning dynamic that robustly leads to learning for all possible information structures. We provide a necessary and sufficient characterization of learnable classes of information structures. Whenever learning is possible in a class p it is also possible via a virtually additive learning dynamic, where players map beliefs to virtual values and in each period they simply sum up all neighbors' virtual values to deduce their new belief. 
In addition, we relax the common prior assumption and provide a sufficient condition for learning in the absence of a common prior.\",\"PeriodicalId\":416173,\"journal\":{\"name\":\"Proceedings of the 2019 ACM Conference on Economics and Computation\",\"volume\":\"48 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 ACM Conference on Economics and Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3381563\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 ACM Conference on Economics and Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3381563","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
We study non-Bayesian social learning in large networks with a binary state space. Agents located in a network receive conditionally i.i.d. signals about the state. We refer to the initial distribution of signals as the information structure. In each step, all agents aggregate their beliefs with those of their neighbors according to some non-Bayesian rule; we refer to this aggregation rule as the learning dynamic. We say that a dynamic leads to learning if the beliefs of all agents converge to the correct state with probability approaching one along an increasing sequence of large networks. A class p of information structures is learnable if there exists a learning dynamic that leads to learning for every information structure in p; that is, a single learning dynamic robustly leads to learning across all information structures in the class. We provide a necessary and sufficient characterization of the learnable classes of information structures. Whenever learning is possible in a class p, it is also possible via a virtually additive learning dynamic, in which agents map beliefs to virtual values and, in each period, simply sum the virtual values of their neighbors to deduce their new belief. In addition, we relax the common-prior assumption and provide a sufficient condition for learning in the absence of a common prior.
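To make the virtually additive dynamic concrete, below is a minimal sketch assuming the log-likelihood-ratio (log-odds) map as the virtual-value function, a ring network, binary signals of fixed accuracy, and an agent that includes its own value in the sum. All of these choices, and the names log_odds, belief_of, step, and initial_values, are illustrative assumptions; the paper characterizes when such a dynamic exists, not this particular instantiation.

```python
import math
import random

# A minimal sketch of a virtually additive learning dynamic, under the
# assumptions stated above (log-odds virtual values, ring network,
# signal accuracy 0.6). Not the paper's construction.

def log_odds(belief):
    """Virtual value of a belief in (0, 1): its log-likelihood ratio."""
    return math.log(belief / (1.0 - belief))

def belief_of(value):
    """Inverse map: numerically stable logistic of a virtual value."""
    if value >= 0:
        return 1.0 / (1.0 + math.exp(-value))
    e = math.exp(value)
    return e / (1.0 + e)

def step(values, neighbors):
    """One period: each agent's new virtual value is the sum over its
    neighborhood (itself plus its neighbors, an illustrative choice)."""
    return [values[i] + sum(values[j] for j in nbrs)
            for i, nbrs in enumerate(neighbors)]

def initial_values(n, state, accuracy=0.6):
    """Conditionally i.i.d. binary signals: each signal matches the true
    state with probability `accuracy`; the initial belief is the posterior
    under a uniform prior, stored directly as a virtual value."""
    values = []
    for _ in range(n):
        signal = state if random.random() < accuracy else 1 - state
        belief = accuracy if signal == 1 else 1.0 - accuracy
        values.append(log_odds(belief))
    return values

if __name__ == "__main__":
    random.seed(0)
    n, true_state = 50, 1
    # Ring network: agent i observes agents i-1 and i+1 (mod n).
    neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
    values = initial_values(n, true_state)
    for _ in range(10):
        values = step(values, neighbors)
    beliefs = [belief_of(v) for v in values]
    # Beliefs near 1 indicate that, in this run, agents learned state 1.
    print(f"min belief: {min(beliefs):.4f}, max belief: {max(beliefs):.4f}")
```

One design note on the sketch: the dynamic is run directly in virtual-value space, converting back to beliefs only for inspection. This keeps each period's update a plain sum and avoids numerical overflow when beliefs approach 0 or 1, which the repeated summation quickly produces.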