Title: Q-learning driven cooperative evolution with dual-reputation incentive mechanisms
Authors: Qianwei Zhang, Xinran Zhang
DOI: 10.1016/j.amc.2025.129590
Journal: Applied Mathematics and Computation, Volume 507, Article 129590 (Q1, Mathematics, Applied; Impact Factor 3.4)
Publication date: 2025-06-06
URL: https://www.sciencedirect.com/science/article/pii/S0096300325003169
Q-learning driven cooperative evolution with dual-reputation incentive mechanisms
Reinforcement learning, as a powerful framework for analyzing strategic dynamics in evolutionary games, has gained significant traction in game theory research. In this study, we propose a dual-reputation incentive mechanism that integrates individual and group reputation metrics within the spatial Prisoner's Dilemma paradigm, aiming to elucidate how adaptive Q-learning drives the evolution of cooperation. Our approach combines traditional game payoffs with reputation-based rewards through a novel Q-learning reward function, strategically decomposing reputation into two components: individual rewards (quantifying an agent's behavioral history) and group rewards (reflecting the collective reputation of the agent's local neighborhood). Simulations demonstrate that when individual reputation rewards are prioritized, agents optimize long-term gains by dynamically adjusting strategies under strong motivational incentives, which ultimately enhances global cooperation levels. Microscopic analysis reveals that individual reputation incentives promote high-density cooperator clusters and facilitate the propagation of cooperative behavior. Furthermore, when a high weight is assigned to individual reputation rewards, evolutionary analysis demonstrates that the Q-values of cooperation consistently exceed those of defection, indicating the emergence of cooperation as an evolutionarily stable strategy. This research provides theoretical insights for designing reputation-aware reinforcement learning systems to foster cooperation in real-world social dilemmas.
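The reward structure described above can be sketched in code. The following is a minimal, hypothetical illustration of a Q-learning update whose reward combines the game payoff with weighted individual and group reputation terms, in the spirit of the paper's mechanism; the parameter names (`w_ind`, `w_grp`, `alpha`, `gamma`), the exponential-average reputation update, and the payoff values are illustrative assumptions, not the authors' exact formulation.

```python
import random

C, D = 0, 1  # cooperate / defect

def pd_payoff(a, b, temptation=1.5):
    # Weak Prisoner's Dilemma payoffs (illustrative): R=1, T=temptation, S=P=0.
    table = {(C, C): 1.0, (C, D): 0.0, (D, C): temptation, (D, D): 0.0}
    return table[(a, b)]

class Agent:
    def __init__(self):
        self.q = {C: 0.0, D: 0.0}   # Q-values for the two strategies
        self.reputation = 0.5       # individual reputation in [0, 1]

    def act(self, eps=0.1):
        # Epsilon-greedy strategy selection.
        if random.random() < eps:
            return random.choice([C, D])
        return C if self.q[C] >= self.q[D] else D

def step(agent, neighbors, w_ind=0.8, w_grp=0.2, alpha=0.1, gamma=0.9):
    a = agent.act()
    payoff = sum(pd_payoff(a, nb.act()) for nb in neighbors)
    # Individual reputation: a decaying average of the agent's own history.
    agent.reputation = 0.9 * agent.reputation + 0.1 * (1.0 if a == C else 0.0)
    # Group reputation: the mean reputation of the local neighborhood.
    group_rep = sum(nb.reputation for nb in neighbors) / len(neighbors)
    # Combined reward: game payoff plus weighted dual-reputation terms.
    reward = payoff + w_ind * agent.reputation + w_grp * group_rep
    # Standard Q-learning update toward the combined reward.
    best_next = max(agent.q.values())
    agent.q[a] += alpha * (reward + gamma * best_next - agent.q[a])
    return a, reward
```

Raising `w_ind` relative to `w_grp` corresponds to the regime the paper reports as most favorable to cooperation, where the Q-value of cooperation comes to dominate.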
Journal description:
Applied Mathematics and Computation addresses work at the interface between applied mathematics, numerical computation, and applications of systems-oriented ideas to the physical, biological, social, and behavioral sciences, and emphasizes papers of a computational nature focusing on new algorithms, their analysis, and numerical results.
In addition to presenting research papers, Applied Mathematics and Computation publishes review articles and single-topic issues.