Minimax weight learning for absorbing MDPs

IF 1.2 · CAS Mathematics Tier 3 · JCR Q2, Statistics & Probability
Fengying Li, Yuqiang Li, Xianyi Wu
DOI: 10.1007/s00362-023-01491-4
Journal: Statistical Papers, Vol. 43(1)
Published: 2024-03-06 (Journal Article)
Citations: 0

Abstract


Reinforcement learning policy evaluation problems are often modeled as finite-horizon or discounted/average-reward infinite-horizon Markov decision processes (MDPs). In this paper, we study undiscounted off-policy evaluation for absorbing MDPs. Given a dataset consisting of i.i.d. episodes under a given truncation level, we propose an algorithm (referred to as MWLA) that directly estimates the expected return via the importance ratio of the state-action occupancy measure. A mean square error (MSE) bound for the MWLA method is provided, and the dependence of the statistical error on the data size and the truncation level is analyzed. The performance of the algorithm is illustrated by computational experiments in an episodic taxi environment.
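The estimator described in the abstract can be sketched in a few lines, assuming a weight function w(s, a) approximating the occupancy ratio is already in hand (learning that weight is the minimax part of MWLA; the function and data below are hypothetical, not the authors' implementation):

```python
def occupancy_weighted_return(episodes, w, T):
    """Sketch of an occupancy-ratio off-policy estimate: the undiscounted
    expected return under the target policy is approximated by a weighted
    average of observed rewards, where w(s, a) stands in for the ratio of
    target to behavior state-action occupancy measures.

    episodes -- list of trajectories, each a list of (state, action, reward)
    w        -- weight function w(s, a), assumed given here
    T        -- truncation level: transitions beyond step T are discarded
    """
    total = 0.0
    for ep in episodes:
        for (s, a, r) in ep[:T]:
            total += w(s, a) * r
    return total / len(episodes)

# Toy sanity check: with w identically 1, the estimate reduces to the
# behavior policy's own average truncated return.
eps = [[(0, 0, 1.0), (1, 0, 0.0)], [(0, 1, 2.0)]]
print(occupancy_weighted_return(eps, lambda s, a: 1.0, T=10))  # 1.5
```

With a well-estimated weight function, the same weighted average targets the evaluation policy's return without reweighting entire trajectories, which is what makes occupancy-measure methods attractive for long or absorbing episodes.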

Source journal: Statistical Papers (Mathematics – Statistics & Probability)
CiteScore: 2.80
Self-citation rate: 7.70%
Annual article count: 95
Review time: 6-12 weeks
Journal description: The journal Statistical Papers addresses itself to all persons and organizations that have to deal with statistical methods in their own field of work. It attempts to provide a forum for the presentation and critical assessment of statistical methods, in particular for the discussion of their methodological foundations as well as their potential applications. Methods that have broad applications will be preferred. However, special attention is given to those statistical methods which are relevant to the economic and social sciences. In addition to original research papers, readers will find survey articles, short notes, reports on statistical software, a problem section, and book reviews.