Q-learning driven cooperative evolution with dual-reputation incentive mechanisms

IF 3.4 | CAS Zone 2, Mathematics | JCR Q1, MATHEMATICS, APPLIED
Qianwei Zhang, Xinran Zhang
DOI: 10.1016/j.amc.2025.129590
Journal: Applied Mathematics and Computation, Volume 507, Article 129590
Published: 2025-06-06 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0096300325003169
Citations: 0

Q-learning driven cooperative evolution with dual-reputation incentive mechanisms
Reinforcement learning, as a powerful framework for analyzing strategic dynamics in evolutionary games, has gained significant traction in game theory research. In this study, we propose a dual-reputation incentive mechanism that integrates individual and group reputation metrics within the spatial Prisoner's Dilemma paradigm, aiming to elucidate how adaptive Q-learning drives the evolution of cooperation. Our approach combines traditional game payoffs with reputation-based rewards through a novel Q-learning reward function, strategically decomposing reputation into two components: individual rewards (quantifying an agent's behavioral history) and group rewards (reflecting the collective reputation of their local neighborhood). Simulations demonstrate that when individual reputation rewards are prioritized, agents optimize long-term gains by dynamically adjusting strategies under strong motivational incentives, which ultimately enhances global cooperation levels. Microscopic analysis reveals that individual reputation incentives promote high-density cooperator clusters and facilitate the propagation of cooperative behavior. Furthermore, when a high weight is assigned to individual reputation rewards, evolutionary analysis demonstrates that cooperative Q-values consistently exceed those of defectors, indicating the emergence of cooperation as an evolutionarily stable strategy. This research provides theoretical insights for designing reputation-aware reinforcement learning systems to foster cooperation in real-world social dilemmas.
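The reward structure the abstract describes can be sketched in code: a stateless Q-learning agent in a Prisoner's Dilemma whose reward adds weighted individual and group reputation terms to the raw game payoff. The payoff matrix, the reputation update rule, and the weights `w_ind` and `w_grp` below are illustrative assumptions based only on the abstract, not the paper's actual parameterization.

```python
import random

COOPERATE, DEFECT = 0, 1
# Illustrative weak Prisoner's Dilemma payoffs (row player's payoff),
# keyed by (own action, opponent action); not the paper's matrix.
PAYOFF = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 1.5, (1, 1): 0.1}

class Agent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.q = [0.0, 0.0]      # Q-values for cooperate / defect
        self.reputation = 0.5    # individual reputation in [0, 1]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self):
        """Epsilon-greedy action selection over the two strategies."""
        if random.random() < self.epsilon:
            return random.randrange(2)
        return COOPERATE if self.q[COOPERATE] >= self.q[DEFECT] else DEFECT

    def learn(self, action, reward):
        """Stateless Q-learning update toward reward + discounted best Q."""
        best_next = max(self.q)
        self.q[action] += self.alpha * (reward + self.gamma * best_next
                                        - self.q[action])

def composite_reward(payoff, agent, neighbors, w_ind=0.6, w_grp=0.4):
    """Game payoff plus weighted individual and group reputation rewards."""
    group_rep = sum(n.reputation for n in neighbors) / len(neighbors)
    return payoff + w_ind * agent.reputation + w_grp * group_rep

def play_round(a, b, neighbors):
    act_a, act_b = a.choose(), b.choose()
    # Assumed dynamic: reputation drifts toward 1 after cooperation
    # and toward 0 after defection, so it stays bounded in [0, 1].
    for ag, act in ((a, act_a), (b, act_b)):
        ag.reputation += 0.1 * ((1.0 - ag.reputation) if act == COOPERATE
                                else -ag.reputation)
    a.learn(act_a, composite_reward(PAYOFF[(act_a, act_b)], a, neighbors))
    b.learn(act_b, composite_reward(PAYOFF[(act_b, act_a)], b, neighbors))
```

Raising `w_ind` relative to `w_grp` mimics the paper's "individual reputation prioritized" regime, in which the abstract reports that cooperative Q-values come to dominate; in the full model the agents would additionally sit on a spatial lattice and interact only with lattice neighbors.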
Source journal: Applied Mathematics and Computation
CiteScore: 7.90
Self-citation rate: 10.00%
Annual articles: 755
Review time: 36 days
Journal description: Applied Mathematics and Computation addresses work at the interface between applied mathematics, numerical computation, and applications of systems-oriented ideas to the physical, biological, social, and behavioral sciences, and emphasizes papers of a computational nature focusing on new algorithms, their analysis, and numerical results. In addition to research papers, Applied Mathematics and Computation publishes review articles and single-topic issues.