{"title":"具有小突变的模仿过程","authors":"D. Fudenberg, L. Imhof","doi":"10.2139/ssrn.619203","DOIUrl":null,"url":null,"abstract":"This note characterizes the impact of adding rare stochastic mutations to an “imitation dynamic,†meaning a process with the properties that absent strategies remain absent, and non-homogeneous states are transient. The resulting system will spend almost all of its time at the absorbing states of the no-mutation process. The work of Freidlin and Wentzell [Random Perturbations of Dynamical Systems, Springer, New York, 1984] and its extensions provide a general algorithm for calculating the limit distribution, but this algorithm can be complicated to apply. This note provides a simpler and more intuitive algorithm. Loosely speaking, in a process with K strategies, it is sufficient to find the invariant distribution of a KA—K Markov matrix on the K homogeneous states, where the probability of a transit from “all play i†to “all play j†is the probability of a transition from the state “all agents but 1 play i, 1 plays j†to the state “all play j†.","PeriodicalId":221813,"journal":{"name":"Harvard Economics Department Working Paper Series","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"283","resultStr":"{\"title\":\"Imitation Processes with Small Mutations\",\"authors\":\"D. Fudenberg, L. Imhof\",\"doi\":\"10.2139/ssrn.619203\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This note characterizes the impact of adding rare stochastic mutations to an “imitation dynamic,†meaning a process with the properties that absent strategies remain absent, and non-homogeneous states are transient. The resulting system will spend almost all of its time at the absorbing states of the no-mutation process. The work of Freidlin and Wentzell [Random Perturbations of Dynamical Systems, Springer, New York, 1984] and its extensions provide a general algorithm for calculating the limit distribution, but this algorithm can be complicated to apply. This note provides a simpler and more intuitive algorithm. 
Loosely speaking, in a process with K strategies, it is sufficient to find the invariant distribution of a KA—K Markov matrix on the K homogeneous states, where the probability of a transit from “all play i†to “all play j†is the probability of a transition from the state “all agents but 1 play i, 1 plays j†to the state “all play j†.\",\"PeriodicalId\":221813,\"journal\":{\"name\":\"Harvard Economics Department Working Paper Series\",\"volume\":\"42 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"283\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Harvard Economics Department Working Paper Series\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.619203\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Harvard Economics Department Working Paper Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.619203","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 283
Abstract
This note characterizes the impact of adding rare stochastic mutations to an "imitation dynamic," meaning a process with the properties that absent strategies remain absent and non-homogeneous states are transient. The resulting system spends almost all of its time at the absorbing states of the no-mutation process. The work of Freidlin and Wentzell [Random Perturbations of Dynamical Systems, Springer, New York, 1984] and its extensions provides a general algorithm for calculating the limit distribution, but this algorithm can be complicated to apply. This note provides a simpler and more intuitive algorithm. Loosely speaking, in a process with K strategies, it is sufficient to find the invariant distribution of a K × K Markov matrix on the K homogeneous states, where the probability of a transition from "all play i" to "all play j" is the probability of a transition from the state "all agents but one play i, one plays j" to the state "all play j".
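
The reduced computation can be sketched in a few lines of Python. This is an illustration only, not the authors' implementation: it assumes a user-supplied function fixation_prob(i, j) giving the probability that a single j-mutant introduced into a population otherwise playing i carries the no-mutation process to "all play j", and the fixation probabilities in the toy example are made-up numbers.

import numpy as np

def limit_distribution(fixation_prob, K):
    # Reduced K x K chain on the homogeneous ("all play i") states.
    # Off-diagonal entry (i, j) is the probability that a lone j-mutant
    # in an all-i population takes the no-mutation process to "all play j";
    # the uniform 1/(K-1) factor keeps each row sub-stochastic and does
    # not change the invariant distribution.
    M = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if i != j:
                M[i, j] = fixation_prob(i, j) / (K - 1)
        M[i, i] = 1.0 - M[i].sum()

    # Invariant distribution: solve pi @ M = pi together with sum(pi) = 1.
    A = np.vstack([(M - np.eye(K)).T, np.ones(K)])
    b = np.zeros(K + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy example with K = 3 strategies and made-up fixation probabilities:
rho = np.array([[0.00, 0.05, 0.30],
                [0.20, 0.00, 0.10],
                [0.02, 0.15, 0.00]])
print(limit_distribution(lambda i, j: rho[i, j], K=3))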