Title: Approximations in Dynamic Zero-Sum Games
Authors: M. Tidball, E. Altman
DOI: 10.1137/s036301299325534x (https://doi.org/10.1137/s036301299325534x)
Journal: Game Theory and Information, vol. 38, no. 1
Type: Journal Article (indexed on Semantic Scholar; 15 citations)

Abstract: We develop a unifying approach for approximating a "limit" zero-sum game by a sequence of approximating games. We discuss both the convergence of the values and the convergence of optimal (or "almost" optimal) strategies. Moreover, based on optimal policies for the limit game, we construct policies that are almost optimal for the approximating games. We then apply the general framework to state approximations of stochastic games, to the convergence of finite-horizon problems to infinite-horizon problems, and to convergence in the discount factor and in the immediate reward.
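The abstract's central idea — that the values of a sequence of approximating games converge to the value of a limit game — can be illustrated numerically in a far simpler setting than the paper's stochastic games. The sketch below (not the paper's method; fictitious play is used here only as a convenient value-approximation device) computes lower and upper bounds on the value of a zero-sum matrix game, then perturbs the payoff matrix to produce "approximating" games whose values converge back to the value of the base game. The matrices, iteration count, and function name are illustrative choices, not from the paper.

```python
def game_value(A, iters=5000):
    """Bracket the value of the zero-sum matrix game A (row player maximizes)
    using fictitious play.  Returns (lower, upper) with lower <= value <= upper:
    the column player's empirical mixture yields an upper bound, the row
    player's a lower bound, and both converge to the value (Robinson, 1951)."""
    m, n = len(A), len(A[0])
    row_cum = [0.0] * m  # row_cum[r]: total payoff of pure row r vs. column history
    col_cum = [0.0] * n  # col_cum[c]: total payoff of pure column c vs. row history
    i, j = 0, 0          # arbitrary initial actions
    for _ in range(iters):
        for r in range(m):
            row_cum[r] += A[r][j]
        for c in range(n):
            col_cum[c] += A[i][c]
        i = max(range(m), key=lambda r: row_cum[r])  # best response to column history
        j = min(range(n), key=lambda c: col_cum[c])  # best response to row history
    return min(col_cum) / iters, max(row_cum) / iters


if __name__ == "__main__":
    # Base ("limit") game: no saddle point; its value is 1 (row player mixes 1/3, 2/3).
    base = [[3.0, -1.0], [0.0, 2.0]]
    print("limit game bounds:", game_value(base))
    # Approximating games: add 1/k to every entry, which shifts the value by
    # exactly 1/k, so the approximating values converge to the limit value.
    for k in (1, 10, 100):
        Ak = [[a + 1.0 / k for a in row] for row in base]
        lo, hi = game_value(Ak)
        print(f"k={k}: bounds ({lo:.3f}, {hi:.3f})  target {1 + 1.0 / k:.3f}")
```

The uniform shift by 1/k is the crudest instance of the paper's theme: here the value map is trivially continuous in the immediate reward, whereas the paper establishes such convergence (of values and of near-optimal strategies) for genuinely dynamic, state-approximated stochastic games.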