Comparison of distance and reinforcement-learning rules in social-influence models

Impact Factor 6.5 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Norberto M. Grzywacz
Journal: Neurocomputing, Volume 649, Article 130870
DOI: 10.1016/j.neucom.2025.130870
Published: 2025-06-26
URL: https://www.sciencedirect.com/science/article/pii/S0925231225015425
Citations: 0

Abstract

Values are essential for decision-making in people and machines. When a decision is to be made, relevant information is obtained and then the course of action that maximizes expected value is selected. For people, values have socio-cultural components, with individuals learning from each other. This learning leads to the socio-cultural organization of values, which includes grouping and polarization. To model this organization, sociologists and social psychologists use agent-based social-influence models. In standard form, these models use distance-based rules, that is, the degree to which a person influences another is a function of the distance between their values. In this article, we also introduce social-influence rules based on reinforcement learning. This is the mechanism that the brain and artificial intelligence use to learn to optimize values. We report computer simulations of the dynamics of multi-agent social-influence models, using either distance or reinforcement-learning rules. Our results show that both types of rules account for grouping and polarization. However, reinforcement-learning rules give a better account of the number of groups, their polarization, and the degree of individuality, that is, of agents not belonging to clusters. Our simulations also reveal factors that influence these results, including contrarians, the number of interacting agents, and the dimensional richness of the value space. Finally, our results with reinforcement-learning rules show complex dynamics of values, including group instability, tipping points, and phase transitions in spontaneous group formation. We discuss the consequences of these results for artificial-intelligence systems learning from each other through social-influence models.
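The abstract describes distance-based rules only at a high level, and the paper's exact update equations are not given here. As an illustration of the standard mechanism the article refers to, below is a minimal sketch of a distance-based (bounded-confidence) rule in the style of the Deffuant–Weisbuch model; all function names and parameter values are illustrative assumptions, not the paper's implementation:

```python
import random

def bounded_confidence_step(values, epsilon=0.2, mu=0.5, rng=random):
    """One pairwise interaction: two randomly chosen agents pull their
    values toward each other only if the distance between the values
    is below the confidence threshold epsilon."""
    i, j = rng.sample(range(len(values)), 2)
    d = values[j] - values[i]
    if abs(d) < epsilon:
        values[i] += mu * d  # mu is the convergence rate
        values[j] -= mu * d

def simulate(n_agents=50, steps=20000, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    values = [rng.random() for _ in range(n_agents)]  # 1-D value space
    for _ in range(steps):
        bounded_confidence_step(values, epsilon=epsilon, rng=rng)
    return values

values = simulate()
# With mu = 0.5 an interacting pair adopts its midpoint, so agents
# collapse into a few clusters; smaller epsilon yields more clusters.
clusters = sorted(set(round(v, 3) for v in values))
```

Grouping and polarization emerge from this rule alone. A reinforcement-learning variant would instead update each agent's value from a reward signal, but since the abstract does not specify that rule, it is not sketched here.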
Source journal: Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.