Evolutionary game dynamics of multi-agent cooperation driven by self-learning

Jinming Du, Bin Wu, Long Wang
{"title":"Evolutionary game dynamics of multi-agent cooperation driven by self-learning","authors":"Jinming Du, Bin Wu, Long Wang","doi":"10.1109/ASCC.2013.6606032","DOIUrl":null,"url":null,"abstract":"Multi-agent cooperation problem is a fundamental issue in the coordination control field. Individuals achieve a common task through association with others or division of labor. Evolutionary game dynamics offers a basic framework to investigate how agents self-adaptively switch their strategies in accordance with various targets, and also the evolution of their behaviors. In this paper, we analytically study the strategy evolution in a multiple player game model driven by self-learning. Self-learning dynamics is of importance for agent strategy updating yet seldom analytically addressed before. It is based on self-evaluation, which applies to distributed control. We focus on the abundance of different strategies (behaviors of agents) and their oscillation (frequency of behavior switching). We arrive at the condition under which a strategy is more abundant over the other under weak selection limit. Such condition holds for any finite population size of N ≥ 3, thus it fits for the systems with finite agents, which has notable advantage over that of pairwise comparison process. At certain states of evolutionary stable state, there exists “ping-pong effect” with stable frequency, which is not affected by aspirations. Our results indicate that self-learning dynamics of multi-player games has special characters. Compared with pairwise comparison dynamics and Moran process, it shows different effect on strategy evolution, such as promoting cooperation in collective risk games with large threshold.","PeriodicalId":6304,"journal":{"name":"2013 9th Asian Control Conference (ASCC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 9th Asian Control Conference (ASCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASCC.2013.6606032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

The multi-agent cooperation problem is a fundamental issue in the field of coordination control. Individuals achieve a common task through association with others or through division of labor. Evolutionary game dynamics offers a basic framework for investigating how agents self-adaptively switch their strategies in pursuit of various targets, and how their behaviors evolve. In this paper, we analytically study strategy evolution in a multi-player game model driven by self-learning. Self-learning dynamics is important for agent strategy updating, yet it has seldom been addressed analytically. It is based on self-evaluation, which makes it suitable for distributed control. We focus on the abundance of different strategies (behaviors of agents) and their oscillation (the frequency of behavior switching). We derive the condition under which one strategy is more abundant than the other in the weak selection limit. This condition holds for any finite population size N ≥ 3, so it applies to systems with finitely many agents, a notable advantage over the corresponding condition for the pairwise comparison process. At certain evolutionarily stable states there exists a “ping-pong effect” with a stable frequency that is not affected by aspiration levels. Our results indicate that the self-learning dynamics of multi-player games has special characteristics. Compared with pairwise comparison dynamics and the Moran process, it affects strategy evolution differently, for example by promoting cooperation in collective risk games with a large threshold.
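The self-learning (aspiration-based) updating described in the abstract can be illustrated with a short simulation. The following is a minimal sketch, not the paper's exact model: the Fermi-type switching probability, the collective-risk payoff function, and all parameter names (alpha, w, T, b, c, d) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50       # population size (finite, N >= 3)
d = 5        # players per game group
T = 3        # cooperation threshold of the collective risk game (assumed)
c = 1.0      # cost of cooperating (assumed)
b = 5.0      # benefit kept if the threshold is met (assumed)
alpha = 2.0  # aspiration level, common to all agents (assumed)
w = 0.1      # selection intensity; weak selection means w << 1

def payoff(is_cooperator, num_coop_others):
    """Illustrative collective-risk payoff: the benefit is kept only if
    at least T of the d group members cooperate; cooperators pay cost c."""
    total_coop = num_coop_others + (1 if is_cooperator else 0)
    reward = b if total_coop >= T else 0.0
    return reward - (c if is_cooperator else 0.0)

def switch_prob(pi):
    """Self-learning rule (assumed Fermi form): an agent compares its own
    payoff pi with its aspiration alpha and switches strategy with a
    probability that increases as the payoff falls short of aspiration."""
    return 1.0 / (1.0 + np.exp(w * (pi - alpha)))

strategies = rng.integers(0, 2, size=N)  # 1 = cooperate, 0 = defect

for step in range(10_000):
    focal = rng.integers(N)
    # Sample d-1 co-players uniformly from the rest of the population.
    others = rng.choice(np.delete(np.arange(N), focal), size=d - 1,
                        replace=False)
    pi = payoff(strategies[focal] == 1, strategies[others].sum())
    # Self-evaluation: the agent flips its own strategy, no imitation of
    # others is involved, which is what makes the rule fully distributed.
    if rng.random() < switch_prob(pi):
        strategies[focal] ^= 1

print("final fraction of cooperators:", strategies.mean())
```

Note the contrast with pairwise comparison dynamics: there, the switching probability depends on another agent's payoff, whereas here each agent needs only its own payoff and aspiration, which is why the abstract ties self-learning to distributed control.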