On asymptotic value for dynamic games with saddle point

D. Khlopin
{"title":"On asymptotic value for dynamic games with saddle point","authors":"D. Khlopin","doi":"10.1137/1.9781611974072.39","DOIUrl":null,"url":null,"abstract":"The paper is concerned with two-person games with saddle point. We investigate the limits of value functions for long-time-average payoff, discounted average payoff, and the payoff that follows a probability density. \nMost of our assumptions restrict the dynamics of games. In particular, we assume the closedness of strategies under concatenation. It is also necessary for the value function to satisfy Bellman's optimality principle, even if in a weakened, asymptotic sense. \nWe provide two results. The first one is a uniform Tauber result for games: if the value functions for long-time-average payoff converge uniformly, then there exists the uniform limit for probability densities from a sufficiently broad set; moreover, these limits coincide. The second one is the uniform Abel result: if a uniform limit for self-similar densities exists, then the uniform limit for long-time average payoff also exists, and they coincide.","PeriodicalId":193106,"journal":{"name":"SIAM Conf. on Control and its Applications","volume":"57 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Conf. on Control and its Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1137/1.9781611974072.39","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

The paper is concerned with two-person games that admit a saddle point. We investigate the limits of value functions for the long-time-average payoff, the discounted average payoff, and payoffs weighted by a probability density. Most of our assumptions restrict the dynamics of the games; in particular, we assume that the strategies are closed under concatenation. We also require the value function to satisfy Bellman's optimality principle, if only in a weakened, asymptotic sense. We provide two results. The first is a uniform Tauberian result for games: if the value functions for the long-time-average payoff converge uniformly, then the value functions for payoffs weighted by densities from a sufficiently broad set also converge uniformly, and the limits coincide. The second is a uniform Abelian result: if a uniform limit exists for a self-similar family of densities, then the uniform limit for the long-time-average payoff also exists, and the two limits coincide.
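To fix notation for the three payoffs the abstract compares, the following is a minimal sketch in the notation customary in this literature; the running payoff g, the trajectory z, and the symbols T, \lambda, \varrho are our illustrative assumptions, not notation taken from the paper itself.

\[
v_T = \frac{1}{T}\int_0^T g\bigl(z(t)\bigr)\,dt
\qquad \text{(long-time-average payoff on horizon } T>0\text{)},
\]
\[
v_\lambda = \lambda\int_0^\infty e^{-\lambda t}\,g\bigl(z(t)\bigr)\,dt
\qquad \text{(discounted average payoff with rate } \lambda>0\text{)},
\]
\[
v_\varrho = \int_0^\infty \varrho(t)\,g\bigl(z(t)\bigr)\,dt,
\qquad \varrho\ge 0,\quad \int_0^\infty \varrho(t)\,dt=1
\qquad \text{(payoff following a density } \varrho\text{)}.
\]

Note that \(v_T\) is itself a density payoff, namely \(v_\varrho\) for the uniform density \(\varrho=\tfrac{1}{T}\mathbf{1}_{[0,T]}\). A family of densities is self-similar if it is generated from a single base density \(\pi\) by time rescaling, \(\pi_\lambda(t)=\lambda\,\pi(\lambda t)\); the exponential densities \(\varrho_\lambda(t)=\lambda e^{-\lambda t}\), which reproduce the discounted payoff above, are the canonical example.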