Multi-policy posterior sampling for restless Markov bandits

Suleman Alnatheer, H. Man
2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), December 2014.
DOI: 10.1109/GlobalSIP.2014.7032327
Citations: 1

Abstract

This paper considers the Multi-Armed Restless Bandits problem, where each arm has time-varying rewards generated by an unknown two-state discrete-time Markov process. Each chain is assumed irreducible, aperiodic, and unaffected by the agent's actions. No optimal solution or constant-factor approximation exists for all instances of the Restless Bandits problem; in fact, the problem has been proven intractable even when all parameters are known. A polynomial-time algorithm is proposed that learns the transition parameters of each arm and selects the perceived optimal policy from a set of predefined policies using belief probability distributions. More precisely, the proposed algorithm compares the mean reward of consistently playing the best perceived arm to the mean reward of a myopically selected combination of arms, using randomized probability matching, better known as Thompson Sampling. Empirical evaluations presented at the end of the paper show improved performance on all instances of the problem compared to other existing algorithms, except for a small set of instances where arms are similar and bursty.
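The abstract's core mechanism, maintaining posterior beliefs over each arm's unknown transition probabilities and using randomized probability matching (Thompson Sampling) to decide what to play, can be sketched as follows. This is an illustrative simplification rather than the paper's algorithm: it keeps independent Beta posteriors on each arm's transition probabilities p01 (bad-to-good) and p10 (good-to-bad), samples them each round, and plays the arm whose sampled chain has the highest stationary probability p01/(p01+p10) of being in the rewarding state. All class and function names are hypothetical, and treating consecutive observations of the same arm as one-step transitions is an approximation when other arms were played in between.

```python
import random

class TwoStateArm:
    """Bernoulli rewards driven by a two-state Markov chain with
    transition probabilities p01 (state 0 -> 1) and p10 (state 1 -> 0).
    The chain is restless: it evolves whether or not it is played."""
    def __init__(self, p01, p10):
        self.p01, self.p10 = p01, p10
        self.state = 0

    def step(self):
        move = self.p01 if self.state == 0 else 1 - self.p10
        self.state = 1 if random.random() < move else 0
        return self.state  # reward = 1 in the good state

def thompson_restless(arms, horizon, seed=0):
    """Posterior-sampling sketch: sample each arm's transition
    probabilities from Beta posteriors and play the arm whose sampled
    chain has the highest stationary probability of the good state."""
    random.seed(seed)
    n = len(arms)
    # Beta(1, 1) priors on p01 and p10 per arm, stored as [alpha, beta]
    post = [{"p01": [1, 1], "p10": [1, 1]} for _ in range(n)]
    last = [None] * n  # last observed state of each arm
    total = 0
    for _ in range(horizon):
        def score(i):
            a01, b01 = post[i]["p01"]
            a10, b10 = post[i]["p10"]
            p01 = random.betavariate(a01, b01)
            p10 = random.betavariate(a10, b10)
            return p01 / (p01 + p10)  # stationary P(good state)
        k = max(range(n), key=score)
        for a in arms:       # every chain advances (restless)
            a.step()
        s = arms[k].state    # only the played arm is observed
        total += s
        # Approximate posterior update: treat the jump from the last
        # observation of arm k as a single one-step transition.
        if last[k] is not None:
            if last[k] == 0:
                post[k]["p01"][0 if s == 1 else 1] += 1
            else:
                post[k]["p10"][0 if s == 0 else 1] += 1
        last[k] = s
    return total
```

With one arm whose good state is highly persistent and one whose good state is rare, the sampler concentrates its plays on the better chain after a short exploration phase, which is the behavior the abstract's comparison between "staying with the best perceived arm" and myopic switching is designed to exploit.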