Leveraging Initial Hints for Free in Stochastic Linear Bandits

Ashok Cutkosky, Christoph Dann, Abhimanyu Das, Qiuyi Zhang
International Conference on Algorithmic Learning Theory. Published 2022-03-08. DOI: 10.48550/arXiv.2203.04274. Citations: 1.

Abstract

We study the setting of optimizing with bandit feedback, with additional prior knowledge provided to the learner in the form of an initial hint of the optimal action. We present a novel algorithm for stochastic linear bandits that uses this hint to improve its regret to $\tilde O(\sqrt{T})$ when the hint is accurate, while maintaining a minimax-optimal $\tilde O(d\sqrt{T})$ regret independent of the quality of the hint. Furthermore, we provide a Pareto frontier of tight tradeoffs between best-case and worst-case regret, with matching lower bounds. Perhaps surprisingly, our work shows that leveraging a hint yields provable gains without sacrificing worst-case performance, implying that our algorithm adapts to the quality of the hint for free. We also provide an extension of our algorithm to the case of $m$ initial hints, showing that we can achieve a $\tilde O(m^{2/3}\sqrt{T})$ regret.
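To make the setting concrete, the sketch below simulates a stochastic linear bandit with a finite action set and a hint. This is not the paper's algorithm: it is a generic ridge-regression UCB learner that merely breaks ties in favor of the hinted action, so it illustrates the problem setup (hint, noisy linear rewards, cumulative regret) rather than the paper's best-case/worst-case tradeoff. All function and parameter names here are our own.

```python
import numpy as np

def hinted_linear_bandit(theta_star, actions, hint, T, sigma=0.1, seed=0):
    """Illustrative sketch only (not the authors' algorithm).

    A ridge-regression UCB learner over a finite action set that plays
    the hinted action whenever its UCB index ties the maximum.

    theta_star : (d,) true parameter vector (unknown to the learner)
    actions    : (K, d) matrix of candidate action vectors
    hint       : index of the hinted (supposedly optimal) action
    Returns the cumulative regret over T rounds.
    """
    rng = np.random.default_rng(seed)
    d = actions.shape[1]
    A = np.eye(d)          # ridge-regularized Gram matrix
    b = np.zeros(d)        # accumulated reward-weighted actions
    means = actions @ theta_star
    opt = means.max()
    regret = 0.0
    for t in range(1, T + 1):
        theta_hat = np.linalg.solve(A, b)
        Ainv = np.linalg.inv(A)
        # confidence width ||x||_{A^{-1}} for each action
        width = np.sqrt(np.einsum('ij,jk,ik->i', actions, Ainv, actions))
        ucb = actions @ theta_hat + np.sqrt(np.log(t + 1)) * width
        # trust the hint whenever it ties the best UCB index
        if ucb[hint] >= ucb.max() - 1e-12:
            a = hint
        else:
            a = int(np.argmax(ucb))
        x = actions[a]
        r = x @ theta_star + sigma * rng.standard_normal()
        A += np.outer(x, x)
        b += r * x
        regret += opt - means[a]
    return regret
```

With an accurate hint the learner commits to the good arm early; with a wrong hint the UCB indices eventually override it, so regret stays bounded either way (though this tie-break rule alone does not achieve the paper's guarantees).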