Q-learning facilitates norm emergence in metanorm game model with topological structures

IF 5.6 · JCR Zone 1 (Mathematics) · Q1 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS
Wei Zhang , Dongkai Zhao , Xing Jin , Huizhen Zhang , Tianbo An , Guanghai Cui , Zhen Wang
Journal: Chaos, Solitons & Fractals, Volume 195, Article 116297
DOI: 10.1016/j.chaos.2025.116297
Publication date: 2025-03-22 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0960077925003108
Citations: 0

Abstract

Q-learning facilitates norm emergence in metanorm game model with topological structures
Axelrod’s model and its subsequent studies have become a valuable framework for fostering cooperation norms among self-interested agents. Within this framework, the concepts of “boldness” and “vengefulness” are specifically employed to characterize agents’ behaviors in terms of cooperation and punishment (including metapunishment). Describing behavior solely through the parameters B and V may be overly simplistic and lack generalizability, making it difficult to apply to other scenarios. Moreover, privacy concerns and the difficulty of evaluating complex states in real-world scenarios limit agents’ access to detailed payoff information from their neighbors. To address these questions, our paper employs self-regarding Q-learning, a well-established method for examining the dynamics of strategy updates and agents’ learning processes, to investigate whether metanorms can naturally emerge through players’ strategy selection. Through extensive experiments, we observe the successful emergence of cooperative norms, driven by variations in agents’ strategy selection. Over 90% of agents choose to cooperate on average. In subsequent analyses, we explore the underlying reasons for the emergence of cooperative norms from the perspectives of changes in Q-values and in punishment and metapunishment frequencies. Additionally, we examine the impact of topological structures on players’ strategy selection and assess the emergence of norms across different temptation levels, population sizes, and regulatory intensity levels to validate the model’s sensitivity.
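To make the setup above concrete, here is a minimal sketch of self-regarding Q-learning on a networked punishment game. All payoff values, the punishment probability, the ring topology, and the omission of metapunishment are illustrative assumptions for exposition, not the paper's exact model: each agent keeps only its own (stateless) Q-table and learns solely from its own rewards, with no access to neighbors' payoffs.

```python
import random

ACTIONS = ["cooperate", "defect"]

class QAgent:
    """Self-regarding Q-learner with a single-state Q-table (assumption)."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {a: 0.0 for a in ACTIONS}

    def choose(self):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # standard Q-learning update: Q(a) += alpha * (r + gamma * max Q - Q(a))
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (reward + self.gamma * best_next - self.q[action])

def play_round(agents, temptation=1.5, hurt=-1.0, punish_cost=-0.2, fine=-2.0):
    """One round on a ring: defectors gain temptation, harm neighbors,
    and neighbors probabilistically punish them (all values hypothetical)."""
    n = len(agents)
    actions = [a.choose() for a in agents]
    rewards = [0.0] * n
    for i, act in enumerate(actions):
        neighbors = [(i - 1) % n, (i + 1) % n]
        if act == "defect":
            rewards[i] += temptation
            for j in neighbors:
                rewards[j] += hurt
                if random.random() < 0.5:  # neighbor punishes with prob. 0.5 (assumption)
                    rewards[j] += punish_cost
                    rewards[i] += fine
        else:
            rewards[i] += 1.0  # baseline cooperation payoff (assumption)
    for agent, act, r in zip(agents, actions, rewards):
        agent.update(act, r)
    return actions

random.seed(0)
agents = [QAgent() for _ in range(50)]
for _ in range(2000):
    actions = play_round(agents)
coop_rate = actions.count("cooperate") / len(actions)
print(f"final cooperation rate: {coop_rate:.2f}")
```

Under these illustrative payoffs the expected reward for defecting (temptation minus expected fines) falls below the cooperation baseline, so the Q-learners converge to a high cooperation rate, mirroring the qualitative finding that norms can emerge from individual strategy selection without neighbor payoff information.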
Source journal: Chaos Solitons & Fractals (Physics / Mathematics, Interdisciplinary Applications)
CiteScore: 13.20
Self-citation rate: 10.30%
Articles per year: 1087
Review time: 9 months
Journal description: Chaos, Solitons & Fractals strives to establish itself as a premier journal in the interdisciplinary realm of Nonlinear Science, Non-equilibrium, and Complex Phenomena. It welcomes submissions covering a broad spectrum of topics within this field, including dynamics, non-equilibrium processes in physics, chemistry, and geophysics, complex matter and networks, mathematical models, computational biology, applications to quantum and mesoscopic phenomena, fluctuations and random processes, self-organization, and social phenomena.