New Aspects of Black Box Conditional Gradient: Variance Reduction and One Point Feedback

Andrey Veprikov, Alexander Bogdanov, Vladislav Minashkin, Alexander Beznosikov
{"title":"New Aspects of Black Box Conditional Gradient: Variance Reduction and One Point Feedback","authors":"Andrey Veprikov, Alexander Bogdanov, Vladislav Minashkin, Alexander Beznosikov","doi":"arxiv-2409.10442","DOIUrl":null,"url":null,"abstract":"This paper deals with the black-box optimization problem. In this setup, we\ndo not have access to the gradient of the objective function, therefore, we\nneed to estimate it somehow. We propose a new type of approximation JAGUAR,\nthat memorizes information from previous iterations and requires\n$\\mathcal{O}(1)$ oracle calls. We implement this approximation in the\nFrank-Wolfe and Gradient Descent algorithms and prove the convergence of these\nmethods with different types of zero-order oracle. Our theoretical analysis\ncovers scenarios of non-convex, convex and PL-condition cases. Also in this\npaper, we consider the stochastic minimization problem on the set $Q$ with\nnoise in the zero-order oracle; this setup is quite unpopular in the\nliterature, but we prove that the JAGUAR approximation is robust not only in\ndeterministic minimization problems, but also in the stochastic case. We\nperform experiments to compare our gradient estimator with those already known\nin the literature and confirm the dominance of our methods.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Optimization and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10442","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper deals with the black-box optimization problem, in which we do not have access to the gradient of the objective function and therefore need to estimate it. We propose a new type of gradient approximation, JAGUAR, which memorizes information from previous iterations and requires only $\mathcal{O}(1)$ oracle calls per iteration. We implement this approximation in the Frank-Wolfe and Gradient Descent algorithms and prove the convergence of these methods with different types of zero-order oracles. Our theoretical analysis covers the non-convex, convex, and PL-condition cases. We also consider the stochastic minimization problem on a set $Q$ with noise in the zero-order oracle; this setup has received little attention in the literature, but we prove that the JAGUAR approximation is robust not only in deterministic minimization problems but also in the stochastic case. We perform experiments comparing our gradient estimator with those already known in the literature and confirm the superiority of our methods.
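The abstract only sketches how JAGUAR works, so below is a minimal illustration of the idea as we read it: a stored gradient estimate is refreshed one coordinate at a time via a two-point finite difference, and that estimate drives a Frank-Wolfe step. This is a hedged sketch, not the authors' code: the function name `jaguar_frank_wolfe`, the parameters `tau` and `gamma_fn`, and the choice of a Euclidean-ball feasible set $Q$ (whose linear minimization oracle has a closed form) are illustrative assumptions.

```python
import numpy as np

def jaguar_frank_wolfe(f, x0, radius=1.0, tau=1e-4,
                       gamma_fn=lambda k: 2 / (k + 2),
                       n_iters=1000, rng=None):
    """Frank-Wolfe with a JAGUAR-style zero-order gradient estimate (sketch).

    At each step only ONE coordinate of the memorized gradient estimate
    `h` is refreshed via a two-point finite difference (2 oracle calls),
    so the per-iteration oracle cost is O(1) rather than the O(d) of a
    full coordinate-wise estimator. The feasible set Q is assumed here
    to be the Euclidean ball of the given radius.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0.copy()
    d = x.size
    h = np.zeros(d)                       # memorized gradient estimate
    for k in range(n_iters):
        i = rng.integers(d)               # random coordinate to refresh
        e = np.zeros(d)
        e[i] = 1.0
        # two-point zero-order feedback along coordinate i
        h[i] = (f(x + tau * e) - f(x - tau * e)) / (2 * tau)
        # linear minimization oracle over the ball: argmin_{s in Q} <h, s>
        s = -radius * h / (np.linalg.norm(h) + 1e-12)
        x = x + gamma_fn(k) * (s - x)     # Frank-Wolfe step
    return x

# Example: minimize ||x - b||^2 over the unit Euclidean ball.
b = np.array([0.6, -0.8, 0.3])
x_star = jaguar_frank_wolfe(lambda x: np.sum((x - b) ** 2), x0=np.zeros(3))
```

On a smooth objective this refreshes the memorized estimate `h` with only two zero-order oracle calls per iteration, which is the $\mathcal{O}(1)$ cost mentioned in the abstract; a full coordinate-wise finite-difference estimator would instead need $2d$ calls per iteration.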