Trend Detection Based Regret Minimization for Bandit Problems

Paresh Nakhe, Rebecca Reiffenhäuser
Published in: 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), October 2016.
DOI: 10.1109/DSAA.2016.35

Abstract

We study a variation of the classical multi-armed bandit problem. In this problem, the learner has to make a sequence of decisions, picking from a fixed set of choices. In each round, she receives as feedback only the loss incurred from the chosen action. Conventionally, this problem has been studied when the losses of the actions are drawn from an unknown distribution or when they are adversarial. In this paper, we study the problem when the losses of the actions also satisfy certain structural properties and, in particular, exhibit a trend structure. When this is true, we show that using trend detection, we can achieve regret of order Õ(N√(TK)) with respect to a switching strategy for the version of the problem where a single action is chosen in each round, and Õ(Nm√(TK)) when m actions are chosen in each round. This guarantee is a significant improvement over the conventional benchmark. Our approach can, as a framework, be applied in combination with various well-known bandit algorithms, like Exp3. For both versions of the problem, we also give regret guarantees for the anytime setting, i.e., when the length of the choice sequence is not known in advance. Finally, we pinpoint the advantages of our method by comparing it to several other well-known strategies.
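The abstract names Exp3 as one of the bandit algorithms the trend-detection framework can wrap. For context, the following is a minimal, self-contained sketch of plain Exp3 under bandit feedback; it is not the paper's trend-detection algorithm, and the function name, signature, and parameter choices are illustrative assumptions.

```python
import math
import random

def exp3(K, T, loss_fn, gamma=0.1, seed=0):
    """Minimal Exp3 sketch (assumed interface, not the paper's method).

    K: number of arms; T: number of rounds; gamma: exploration rate.
    loss_fn(t, arm) returns a loss in [0, 1]; only the chosen arm's
    loss is observed each round (bandit feedback).
    Returns the final weight vector over arms.
    """
    rng = random.Random(seed)
    weights = [1.0] * K
    for t in range(T):
        total = sum(weights)
        # Mix the exponential-weights distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / K for w in weights]
        arm = rng.choices(range(K), weights=probs)[0]
        loss = loss_fn(t, arm)
        # Importance-weighted estimate: unbiased for the chosen arm's loss.
        est = loss / probs[arm]
        weights[arm] *= math.exp(-gamma * est / K)
    return weights
```

Run against a bandit where one arm has consistently lower loss, the weight of that arm comes to dominate, so the sampling distribution concentrates on it while the γ-term keeps exploring the rest.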