A Distributed Learning Dynamics in Social Groups

L. E. Celis, P. Krafft, Nisheeth K. Vishnoi
{"title":"社会群体中的分布式学习动态","authors":"L. E. Celis, P. Krafft, Nisheeth K. Vishnoi","doi":"10.1145/3087801.3087820","DOIUrl":null,"url":null,"abstract":"We study a distributed learning process observed in human groups and other social animals. This learning process appears in settings in which each individual in a group is trying to decide over time, in a distributed manner, which option to select among a shared set of options. Specifically, we consider a stochastic dynamics in a group in which every individual selects an option in the following two-step process: (1) select a random individual and observe the option that individual chose in the previous time step, and (2) adopt that option if its stochastic quality was good at that time step. Various instantiations of such distributed learning appear in nature, and have also been studied in the social science literature. From the perspective of an individual, an attractive feature of this learning process is that it is a simple heuristic that requires extremely limited computational capacities. But what does it mean for the group -- could such a simple, distributed and essentially memoryless process lead the group as a whole to perform optimally? We show that the answer to this question is yes -- this distributed learning is highly effective at identifying the best option and is close to optimal for the group overall. Our analysis also gives quantitative bounds that show fast convergence of these stochastic dynamics. We prove our result by first defining a (stochastic) infinite population version of these distributed learning dynamics and then combining its strong convergence properties along with its relation to the finite population dynamics. Prior to our work the only theoretical work related to such learning dynamics has been either in deterministic special cases or in the asymptotic setting. Finally, we observe that our infinite population dynamics is a stochastic variant of the classic multiplicative weights update (MWU) method. Consequently, we arrive at the following interesting converse: the learning dynamics on a finite population considered here can be viewed as a novel distributed and low-memory implementation of the classic MWU method.","PeriodicalId":324970,"journal":{"name":"Proceedings of the ACM Symposium on Principles of Distributed Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"A Distributed Learning Dynamics in Social Groups\",\"authors\":\"L. E. Celis, P. Krafft, Nisheeth K. Vishnoi\",\"doi\":\"10.1145/3087801.3087820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We study a distributed learning process observed in human groups and other social animals. This learning process appears in settings in which each individual in a group is trying to decide over time, in a distributed manner, which option to select among a shared set of options. Specifically, we consider a stochastic dynamics in a group in which every individual selects an option in the following two-step process: (1) select a random individual and observe the option that individual chose in the previous time step, and (2) adopt that option if its stochastic quality was good at that time step. Various instantiations of such distributed learning appear in nature, and have also been studied in the social science literature. 
From the perspective of an individual, an attractive feature of this learning process is that it is a simple heuristic that requires extremely limited computational capacities. But what does it mean for the group -- could such a simple, distributed and essentially memoryless process lead the group as a whole to perform optimally? We show that the answer to this question is yes -- this distributed learning is highly effective at identifying the best option and is close to optimal for the group overall. Our analysis also gives quantitative bounds that show fast convergence of these stochastic dynamics. We prove our result by first defining a (stochastic) infinite population version of these distributed learning dynamics and then combining its strong convergence properties along with its relation to the finite population dynamics. Prior to our work the only theoretical work related to such learning dynamics has been either in deterministic special cases or in the asymptotic setting. Finally, we observe that our infinite population dynamics is a stochastic variant of the classic multiplicative weights update (MWU) method. Consequently, we arrive at the following interesting converse: the learning dynamics on a finite population considered here can be viewed as a novel distributed and low-memory implementation of the classic MWU method.\",\"PeriodicalId\":324970,\"journal\":{\"name\":\"Proceedings of the ACM Symposium on Principles of Distributed Computing\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ACM Symposium on Principles of Distributed Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3087801.3087820\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Symposium on Principles of Distributed Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3087801.3087820","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14

Abstract

We study a distributed learning process observed in human groups and other social animals. This learning process appears in settings in which each individual in a group is trying to decide over time, in a distributed manner, which option to select among a shared set of options. Specifically, we consider a stochastic dynamics in a group in which every individual selects an option in the following two-step process: (1) select a random individual and observe the option that individual chose in the previous time step, and (2) adopt that option if its stochastic quality was good at that time step. Various instantiations of such distributed learning appear in nature, and have also been studied in the social science literature. From the perspective of an individual, an attractive feature of this learning process is that it is a simple heuristic that requires extremely limited computational capacities. But what does it mean for the group -- could such a simple, distributed and essentially memoryless process lead the group as a whole to perform optimally? We show that the answer to this question is yes -- this distributed learning is highly effective at identifying the best option and is close to optimal for the group overall. Our analysis also gives quantitative bounds that show fast convergence of these stochastic dynamics. We prove our result by first defining a (stochastic) infinite population version of these distributed learning dynamics and then combining its strong convergence properties along with its relation to the finite population dynamics. Prior to our work the only theoretical work related to such learning dynamics has been either in deterministic special cases or in the asymptotic setting. Finally, we observe that our infinite population dynamics is a stochastic variant of the classic multiplicative weights update (MWU) method. Consequently, we arrive at the following interesting converse: the learning dynamics on a finite population considered here can be viewed as a novel distributed and low-memory implementation of the classic MWU method.
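To make the two-step process above concrete, the following is a minimal simulation sketch of the finite-population dynamics in Python. The Bernoulli quality model, the possibility of an individual sampling itself, and the names `simulate_dynamics`, `qualities`, `n_agents`, and `n_steps` are illustrative assumptions, not the paper's exact specification.

```python
import random

def simulate_dynamics(qualities, n_agents=1000, n_steps=500, seed=0):
    """Simulate the two-step copy-if-good dynamics sketched in the abstract.

    qualities[k] is an assumed Bernoulli success probability for option k;
    the Bernoulli model, the self-sampling, and all parameter names here
    are illustrative assumptions rather than the paper's exact model.
    """
    rng = random.Random(seed)
    n_options = len(qualities)
    # Every individual starts on a uniformly random option.
    choices = [rng.randrange(n_options) for _ in range(n_agents)]

    for _ in range(n_steps):
        # Stochastic quality of each option at this time step (True = "good").
        good = [rng.random() < q for q in qualities]
        new_choices = list(choices)
        for i in range(n_agents):
            # Step (1): pick a random individual and observe its previous-step choice.
            observed = choices[rng.randrange(n_agents)]
            # Step (2): adopt that option only if it was good at this time step;
            # otherwise keep the current option.
            if good[observed]:
                new_choices[i] = observed
        choices = new_choices

    # Fraction of the group holding each option at the end.
    return [choices.count(k) / n_agents for k in range(n_options)]

if __name__ == "__main__":
    # Three options; the best has success probability 0.9.
    print(simulate_dynamics([0.5, 0.7, 0.9]))
```

Under these assumptions, repeated runs place almost all of the group on the highest-quality option, which is the qualitative behaviour the abstract's convergence result describes. A short expectation calculation over the Bernoulli qualities gives the mean-field update x_k ← x_k(1 + q_k − q̄), where q̄ is the population-weighted mean quality; this is the multiplicative-weights flavour the abstract points to, though the paper's own formulation should be consulted for the precise statement.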