Title: Comparison of distance and reinforcement-learning rules in social-influence models
Author: Norberto M. Grzywacz
Journal: Neurocomputing, Volume 649, Article 130870
DOI: 10.1016/j.neucom.2025.130870
Publication date: 2025-06-26
URL: https://www.sciencedirect.com/science/article/pii/S0925231225015425
Citations: 0
Abstract
Values are essential for decision-making in people and machines. When a decision is to be made, relevant information is gathered and then the course of action that maximizes expected value is selected. For people, values have socio-cultural components, with individuals learning from each other. This learning leads to the socio-cultural organization of values, which includes grouping and polarization. To model this organization, sociologists and social psychologists use agent-based social-influence models. In their standard form, these models use distance-based rules, that is, the degree to which one person influences another is a function of the distance between their values. In this article, we also introduce social-influence rules based on reinforcement learning, the mechanism that both the brain and artificial intelligence use to learn to optimize values. We report computer simulations of the dynamics of multi-agent social-influence models using either distance or reinforcement-learning rules. Our results show that both types of rules account for grouping and polarization. However, reinforcement-learning rules better account for the number of groups, their polarization, and the degree of individuality, that is, the proportion of agents not belonging to any cluster. Our simulations also reveal factors that influence these results, including contrarians, the number of interacting agents, and the dimensional richness of the value space. Finally, our results with reinforcement-learning rules show complex dynamics of values, including group instability, tipping points, and phase transitions in spontaneous group formation. We discuss the consequences of these results for artificial-intelligence systems that learn from each other through social-influence models.
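The abstract does not specify the paper's exact update equations, but the distance-based rules it describes are typified by bounded-confidence models of the Deffuant type: two agents interact only if their values lie within a confidence bound, and then move toward each other. The sketch below is a minimal, generic illustration of such a rule, not a reproduction of the paper's method; the parameter names (`epsilon` for the confidence bound, `mu` for the convergence rate) and the one-dimensional value space are assumptions for illustration.

```python
import numpy as np

def distance_rule_step(values, epsilon, mu, rng):
    """One pairwise interaction under a bounded-confidence distance rule.

    Two distinct agents are drawn at random; they influence each other
    only if their values differ by less than the confidence bound
    `epsilon`, in which case each moves a fraction `mu` toward the other.
    """
    i, j = rng.choice(len(values), size=2, replace=False)
    if abs(values[i] - values[j]) < epsilon:
        vi, vj = values[i], values[j]
        values[i] = vi + mu * (vj - vi)
        values[j] = vj + mu * (vi - vj)
    return values

# Toy simulation: 50 agents with values drawn uniformly on [0, 1].
rng = np.random.default_rng(0)
values = rng.uniform(0.0, 1.0, size=50)
initial_spread = values.std()
for _ in range(5000):
    distance_rule_step(values, epsilon=0.2, mu=0.5, rng=rng)
final_spread = values.std()
```

With a narrow confidence bound, agents coalesce into a small number of separated clusters (grouping); widening `epsilon` toward 1 drives global consensus instead. A reinforcement-learning counterpart would replace the fixed distance threshold with reward-driven updates, but the abstract alone does not give enough detail to sketch those rules faithfully.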
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.