New aspects of black box conditional gradient: Variance reduction and one point feedback

Andrey Veprikov, Alexander Bogdanov, Vladislav Minashkin, Aleksandr Beznosikov

Chaos, Solitons & Fractals, Volume 189, Article 115654. Published 2024-10-17. DOI: 10.1016/j.chaos.2024.115654
Abstract: This paper deals with the black-box optimization problem. In this setup, we do not have access to the gradient of the objective function and therefore need to estimate it somehow. We propose a new type of approximation, JAGUAR, which memorizes information from previous iterations and requires O(1) oracle calls. We implement this approximation in the Frank–Wolfe and Gradient Descent algorithms and prove the convergence of these methods with different types of zero-order oracle. Our theoretical analysis covers the non-convex, convex, and PL-condition cases. We also consider the stochastic minimization problem on the set Q with noise in the zero-order oracle; this setup has received little attention in the literature, but we prove that the JAGUAR approximation is robust not only in deterministic minimization problems but also in the stochastic case. We perform experiments comparing our gradient estimator with those already known in the literature and confirm the dominance of our methods.
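The abstract does not spell out the estimator, but its description (memory carried across iterations, O(1) oracle calls per step) suggests a coordinate-wise finite-difference update of a persistent gradient estimate. The sketch below is a speculative illustration of that idea inside a Frank–Wolfe loop, not the authors' actual JAGUAR method: the two-point difference, the l1-ball feasible set, the step size, and all names (`jaguar_step`, `lmo_l1_ball`, `tau`, `gamma`) are assumptions made for demonstration. The paper itself also treats one-point feedback and noisy oracles, which this sketch omits.

```python
# Hypothetical sketch: a memory-based zero-order gradient estimator in the
# spirit described by the abstract, plugged into a Frank-Wolfe loop.
# Reconstructed from the abstract alone; NOT the authors' exact algorithm.

import numpy as np

def jaguar_step(f, x, h, tau=1e-4, rng=None):
    """Refresh one random coordinate of the gradient memory h via a
    two-point finite difference (2 zero-order oracle calls, i.e. O(1))."""
    rng = rng or np.random.default_rng()
    i = rng.integers(x.size)
    e = np.zeros_like(x)
    e[i] = 1.0
    h = h.copy()
    h[i] = (f(x + tau * e) - f(x - tau * e)) / (2 * tau)
    return h

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over an l1-ball: argmin_{s in Q} <g, s>,
    attained at a signed, scaled coordinate vector."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def frank_wolfe_zero_order(f, x0, steps=500, radius=1.0, seed=0):
    """Frank-Wolfe using the memorized zero-order gradient estimate."""
    rng = np.random.default_rng(seed)
    x, h = x0.copy(), np.zeros_like(x0)
    for t in range(steps):
        h = jaguar_step(f, x, h, rng=rng)
        s = lmo_l1_ball(h, radius)
        gamma = 2.0 / (t + 2)  # classical Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s  # stays feasible: convex combination
    return x

# Usage: minimize a simple quadratic over the l1-ball of radius 1.
f = lambda x: 0.5 * np.sum((x - 0.3) ** 2)
x_star = frank_wolfe_zero_order(f, x0=np.zeros(10))
print(f(x_star))
```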
About the journal:
Chaos, Solitons & Fractals strives to establish itself as a premier journal in the interdisciplinary realm of Nonlinear Science, Non-equilibrium, and Complex Phenomena. It welcomes submissions covering a broad spectrum of topics within this field, including dynamics, non-equilibrium processes in physics, chemistry, and geophysics, complex matter and networks, mathematical models, computational biology, applications to quantum and mesoscopic phenomena, fluctuations and random processes, self-organization, and social phenomena.