EURO Journal on Computational Optimization: Latest Articles

A simplified convergence theory for Byzantine resilient stochastic gradient descent
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100038
Lindon Roberts , Edward Smyth
Abstract: In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified convergence theory for the generic Byzantine resilient SGD method originally proposed by Blanchard et al. (2017) [3]. Compared to the existing analysis, we show convergence to a stationary point in expectation under standard assumptions on the (possibly nonconvex) objective function and flexible assumptions on the stochastic gradients.
Cited: 1
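The aggregation idea behind Byzantine resilient SGD can be illustrated with a minimal sketch: replace the mean of the worker gradients with a robust aggregate before taking the descent step. The coordinate-wise median below is one simple robust aggregator, not necessarily the rule analyzed in the paper, and all names and data are illustrative.

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Robust aggregate: the coordinate-wise median is unaffected by a
    minority of arbitrarily corrupted gradient vectors."""
    return np.median(np.stack(gradients), axis=0)

def byzantine_resilient_sgd_step(x, worker_grads, lr=0.1):
    """One SGD step using the robust aggregate instead of the mean."""
    return x - lr * coordinate_wise_median(worker_grads)

# Four honest workers estimate the gradient of f(x) = ||x||^2 / 2 (which
# is x itself); one Byzantine worker sends a huge corrupted vector.
np.random.seed(0)
x = np.array([1.0, -2.0])
honest = [x + 0.01 * np.random.randn(2) for _ in range(4)]
byzantine = [np.array([1e6, -1e6])]
x_new = byzantine_resilient_sgd_step(x, honest + byzantine)
```

Averaging the five vectors would let the corrupted one dominate the step; the median step still decreases the objective.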
Exponential extrapolation memory for tabu search
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100028
Håkon Bentsen, Arild Hoff, Lars Magnus Hvattum
Abstract: Tabu search is a well-established metaheuristic framework for solving hard combinatorial optimization problems. At its core, the method uses different forms of memory to guide a local search through the solution space so as to identify high-quality local optima while avoiding getting stuck in the vicinity of any particular local optimum. This paper examines characteristics of moves that can be exploited to make good decisions about steps that lead away from recently visited local optima and towards a new local optimum. Our approach uses a new type of adaptive memory based on a construction called exponential extrapolation. The memory operates by means of threshold inequalities that ensure selected moves will not lead to a specified number of most recently encountered local optima. Computational experiments on a set of one hundred different benchmark instances for the binary integer programming problem suggest that exponential extrapolation is a useful type of memory to incorporate into a tabu search.
Cited: 0
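The flavor of such threshold-based adaptive memory can be sketched generically. The code below is a simplified recency memory with exponentially decaying move weights and a tabu threshold; it is an illustration of thresholded memory in general, not the exponential-extrapolation construction of the paper, and all names and parameter values are made up.

```python
def make_tabu_memory(decay=0.5, threshold=0.25):
    """Recency memory: each recorded move gets weight 1.0, and all stored
    weights decay geometrically as further moves are recorded."""
    weights = {}
    def record(move):
        for m in list(weights):
            weights[m] *= decay
        weights[move] = 1.0
    def is_tabu(move):
        # forbidden while the decayed weight still exceeds the threshold
        return weights.get(move, 0.0) > threshold
    return record, is_tabu

record, is_tabu = make_tabu_memory()
record("flip_x1")
tabu_now = is_tabu("flip_x1")      # weight 1.0 > 0.25: forbidden
record("flip_x2")
record("flip_x3")
tabu_later = is_tabu("flip_x1")    # weight decayed to 0.25: allowed again
```

The decay rate and threshold together play the role of a tabu tenure: they decide how many subsequent moves must pass before a recorded move becomes admissible again.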
A reinforcement learning approach to the stochastic cutting stock problem
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100027
Anselmo R. Pitombeira-Neto , Arthur H.F. Murta
Abstract: We propose a formulation of the stochastic cutting stock problem as a discounted infinite-horizon Markov decision process. At each decision epoch, given current inventory of items, an agent chooses in which patterns to cut objects in stock in anticipation of the unknown demand. An optimal solution corresponds to a policy that associates each state with a decision and minimizes the expected total cost. Since exact algorithms scale exponentially with the state-space dimension, we develop a heuristic solution approach based on reinforcement learning. We propose an approximate policy iteration algorithm in which we apply a linear model to approximate the action-value function of a policy. Policy evaluation is performed by solving the projected Bellman equation from a sample of state transitions, decisions and costs obtained by simulation. Due to the large decision space, policy improvement is performed via the cross-entropy method. Computational experiments are carried out with the use of realistic data to illustrate the application of the algorithm. Heuristic policies obtained with polynomial and Fourier basis functions are compared with myopic and random policies. Results indicate the possibility of obtaining policies capable of adequately controlling inventories with an average cost up to 80% lower than the cost obtained by a myopic policy.
Cited: 9
Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100041
Abdurakhmon Sadiev , Ekaterina Borodich , Aleksandr Beznosikov , Darina Dvinskikh , Saveliy Chezhegov , Rachael Tappenden , Martin Takáč , Alexander Gasnikov
Abstract: This paper considers the problem of decentralized, personalized federated learning. In centralized personalized federated learning, a penalty measuring the deviation of each local model from the average is often added to the objective function. However, in a decentralized setting this penalty is expensive in terms of communication costs, so here a different penalty, one built to respect the structure of the underlying computational network, is used instead. We present lower bounds on the communication and local computation costs for this problem formulation, and we also present provably optimal methods for decentralized personalized federated learning. Numerical experiments are presented to demonstrate the practical performance of our methods.
Cited: 4
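A structure-respecting penalty of this kind can be illustrated concretely. A common network-aware choice (an assumption here, not necessarily the paper's exact penalty) couples each pair of neighboring nodes in the communication graph, so every gradient term is computable with one exchange between neighbors:

```python
import numpy as np

def personalization_penalty(X, edges, lam):
    """(lam/2) * sum over graph edges of ||x_i - x_j||^2: each term couples
    only neighbors, so its gradient needs only local communication."""
    return 0.5 * lam * sum(np.sum((X[i] - X[j]) ** 2) for i, j in edges)

def penalty_grad(X, edges, lam):
    g = np.zeros_like(X)
    for i, j in edges:
        g[i] += lam * (X[i] - X[j])
        g[j] += lam * (X[j] - X[i])
    return g

# Three nodes on a path graph, scalar local models.
X = np.array([[1.0], [0.0], [0.0]])
edges = [(0, 1), (1, 2)]
value = personalization_penalty(X, edges, lam=1.0)   # 0.5 * (1 + 0) = 0.5
grad = penalty_grad(X, edges, lam=1.0)
```

By contrast, penalizing deviation from the global average would require every node to know the average of all models at every step.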
Twenty years of EUROPT, the EURO working group on Continuous Optimization
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100039
Sonia Cafieri , Tatiana Tchemisova , Gerhard-Wilhelm Weber
Abstract: EUROPT, the Continuous Optimization working group of EURO, celebrated its 20 years of activity in 2020. We trace the history of this working group by presenting the major milestones that have led to its current structure and organization and its major trademarks, such as the annual EUROPT workshop and the EUROPT Fellow recognition.
Cited: 1
New neighborhoods and an iterated local search algorithm for the generalized traveling salesman problem
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100029
Jeanette Schmidt, Stefan Irnich
Abstract: For a given graph with a vertex set that is partitioned into clusters, the generalized traveling salesman problem (GTSP) is the problem of finding a cost-minimal cycle that contains exactly one vertex of every cluster. We introduce three new GTSP neighborhoods that allow the simultaneous permutation of the sequence of the clusters and the selection of vertices from each cluster. The three neighborhoods and some known neighborhoods from the literature are combined into an effective iterated local search (ILS) for the GTSP. The ILS performs a straightforward random neighborhood selection within the local search and applies an ordinary record-to-record ILS acceptance criterion. The computational experiments on four symmetric standard GTSP libraries show that, with some purposeful refinements, the ILS can compete with state-of-the-art GTSP algorithms.
Cited: 1
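The record-to-record acceptance criterion mentioned in the abstract is simple to state in code: accept a candidate solution if it is not much worse than the best (record) solution found so far. The 5% deviation parameter below is illustrative, not the paper's setting.

```python
def record_to_record_accept(candidate_cost, record_cost, deviation=0.05):
    """Accept a candidate (minimization) if it is no worse than the best
    cost found so far, the record, plus a small allowed deviation."""
    return candidate_cost <= record_cost * (1.0 + deviation)

accept_equal = record_to_record_accept(100.0, 100.0)   # True
accept_close = record_to_record_accept(104.0, 100.0)   # True: within 5%
accept_far = record_to_record_accept(106.0, 100.0)     # False: beyond 5%
```

Because the bound is tied to the record rather than the current solution, the search can move downhill and uphill freely within a corridor around the best cost seen.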
Chance-constrained optimization under limited distributional information: A review of reformulations based on sampling and distributional robustness
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100030
Simge Küçükyavuz , Ruiwei Jiang
Abstract: Chance-constrained programming (CCP) is one of the most difficult classes of optimization problems that has attracted the attention of researchers since the 1950s. In this survey, we focus on cases when only limited information on the distribution is available, such as a sample from the distribution, or the moments of the distribution. We first review recent developments in mixed-integer linear formulations of chance-constrained programs that arise from finite discrete distributions (or sample average approximation). We highlight successful reformulations and decomposition techniques that enable the solution of large-scale instances. We then review active research in distributionally robust CCP, which is a framework to address the ambiguity in the distribution of the random data. The focal point of our review is on scalable formulations that can be readily implemented with state-of-the-art optimization software. Furthermore, we highlight the prevalence of CCPs with a review of applications across multiple domains.
Cited: 15
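The sample average approximation idea can be sketched for a single linear chance constraint: draw N scenarios and require the constraint to hold in at least a (1 - eps) fraction of them. The sketch below only checks feasibility of a given point; the mixed-integer reformulations surveyed in the paper additionally introduce binary indicator variables and big-M constraints to optimize over such points. Data and names are illustrative.

```python
import numpy as np

def saa_chance_feasible(x, A, b, eps):
    """Declare x feasible if A[k] @ x <= b[k] holds in at least a
    (1 - eps) fraction of the sampled scenarios (rows of A, entries of b)."""
    satisfied = np.mean(A @ x <= b)
    return satisfied >= 1.0 - eps

# Four scenarios of a scalar constraint xi * x <= b(xi); x = 1 satisfies 3 of 4.
A = np.array([[1.0], [1.0], [1.0], [1.0]])
b = np.array([2.0, 2.0, 2.0, 0.5])
x = np.array([1.0])
ok_30 = saa_chance_feasible(x, A, b, eps=0.30)   # 0.75 >= 0.70: feasible
ok_20 = saa_chance_feasible(x, A, b, eps=0.20)   # 0.75 <  0.80: infeasible
```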
Direct nonlinear acceleration
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2022-01-01 DOI: 10.1016/j.ejco.2022.100047
Aritra Dutta , El Houcine Bergou , Yunming Xiao , Marco Canini , Peter Richtárik
Abstract: Optimization acceleration techniques such as momentum play a key role in state-of-the-art machine learning algorithms. Recently, generic vector sequence extrapolation techniques, such as regularized nonlinear acceleration (RNA) of Scieur et al. [22], were proposed and shown to accelerate fixed point iterations. In contrast to RNA, which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct approach, which we call direct nonlinear acceleration (DNA). In DNA, we aim to minimize (an approximation of) the function value at the extrapolated point instead. We adopt a regularized approach with regularizers designed to prevent the model from entering a region in which the functional approximation is less precise. While the computational cost of DNA is comparable to that of RNA, our direct approach significantly outperforms RNA on both synthetic and real-world datasets. While the focus of this paper is on convex problems, we obtain very encouraging results in accelerating the training of neural networks.
Cited: 1
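For context, the RNA baseline that DNA is compared against can be sketched: given a few iterates of a fixed-point method, choose affine weights that minimize the norm of the combined residuals plus a small regularizer, and combine the iterates with those weights. This is a minimal reading of Scieur et al.'s scheme; normalization and other details differ across variants, and the example data is illustrative.

```python
import numpy as np

def rna_extrapolate(xs, lam=1e-8):
    """Combine past iterates x_0..x_{k-1} with weights c summing to one
    that minimize ||R^T c||^2 + lam * ||c||^2, where row i of R is the
    residual x_{i+1} - x_i; returns the extrapolated point."""
    X = np.stack(xs)               # (k+1, d) iterates
    R = X[1:] - X[:-1]             # (k, d) residuals
    M = R @ R.T + lam * np.eye(len(R))
    z = np.linalg.solve(M, np.ones(len(R)))   # solution of the equality-
    c = z / z.sum()                            # constrained least squares
    return c @ X[:-1]

# Linear fixed-point iteration x <- 0.5 x + b has fixed point 2b;
# extrapolation recovers it from a handful of slow iterates.
b = np.array([1.0, 1.0])
xs, x = [np.zeros(2)], np.zeros(2)
for _ in range(3):
    x = 0.5 * x + b
    xs.append(x)
x_acc = rna_extrapolate(xs)
```

DNA replaces the residual-norm objective above with (an approximation of) the function value at the extrapolated point.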
First-Order Methods for Convex Optimization
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2021-01-01 DOI: 10.1016/j.ejco.2021.100015
Pavel Dvurechensky , Shimrit Shtern , Mathias Staudigl
Abstract: First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by the success stories reported in various applications, including most importantly machine learning, signal processing, imaging and control theory. First-order methods have the potential to provide low accuracy solutions at low computational complexity which makes them an attractive set of tools in large-scale optimization problems. In this survey, we cover a number of key developments in gradient-based optimization methods. This includes non-Euclidean extensions of the classical proximal gradient method, and its accelerated versions. Additionally we survey recent developments within the class of projection-free methods, and proximal versions of primal-dual schemes. We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
Cited: 20
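As a concrete instance of the class of methods surveyed, here is a minimal proximal gradient (ISTA) sketch for l1-regularized least squares; the step size follows the standard 1/L rule, and the data is illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Proximal gradient for min 0.5 ||Ax - b||^2 + lam ||x||_1:
    a gradient step on the smooth part, then the prox of the l1 term."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# With A = I the solution is soft_threshold(b, lam): the small
# coefficient is shrunk exactly to zero.
x_hat = ista(np.eye(2), np.array([3.0, 0.1]), lam=1.0)   # -> [2.0, 0.0]
```

The accelerated and non-Euclidean variants covered in the survey modify the gradient step (momentum, Bregman distances) while keeping this gradient-then-prox structure.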
Conic optimization: A survey with special focus on copositive optimization and binary quadratic problems
IF 2.4
EURO Journal on Computational Optimization Pub Date : 2021-01-01 DOI: 10.1016/j.ejco.2021.100021
Mirjam Dür , Franz Rendl
Abstract: A conic optimization problem is a problem involving a constraint that the optimization variable be in some closed convex cone. Prominent examples are linear programs (LP), second order cone programs (SOCP), semidefinite problems (SDP), and copositive problems. We survey recent progress made in this area. In particular, we highlight the connections between nonconvex quadratic problems, binary quadratic problems, and copositive optimization. We review how tight bounds can be obtained by relaxing the copositivity constraint to semidefiniteness, and we discuss the effect that different modelling techniques have on the quality of the bounds. We also provide some new techniques for lifting linear constraints and show how these can be used for stable set and coloring relaxations.
Cited: 16