{"title":"Feasible Newton methods for symmetric tensor Z-eigenvalue problems","authors":"Jiefeng Xu, Donghui Li, Xueli Bai","doi":"10.1080/10556788.2022.2142586","DOIUrl":"https://doi.org/10.1080/10556788.2022.2142586","url":null,"abstract":"Finding a Z-eigenpair of a symmetric tensor is equivalent to finding a Karush–Kuhn–Tucker point of a sphere-constrained minimization problem. Based on this equivalence, in this paper, we first propose a class of iterative methods to compute a Z-eigenpair of a symmetric tensor. Each method generates a sequence of feasible points such that the sequence of function evaluations is decreasing. These methods can be regarded as extensions of descent methods for unconstrained optimization problems. We pay particular attention to the Newton method. We show that under appropriate conditions, the Newton method is globally and quadratically convergent. Moreover, after finitely many iterations, the unit steplength is always accepted. We also propose a nonlinear-equations-based Newton method and establish its global and quadratic convergence. Finally, we conduct several numerical experiments to test the proposed Newton methods. The results show that both Newton methods are very efficient.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116906531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
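The sphere-constrained view above also admits a simple feasible baseline worth contrasting with the paper's Newton methods: a Z-eigenpair (λ, x) with A x^{m-1} = λx and ||x|| = 1 is a fixed point of a normalized tensor-apply map. A minimal sketch for a third-order symmetric tensor, using the classical shifted symmetric higher-order power method (Kolda and Mayo) rather than the methods proposed in the paper; the function name, shift value, and stopping rule are illustrative choices:

```python
import numpy as np

def z_eigenpair_sshopm(A, alpha=1.0, tol=1e-10, max_iter=1000, seed=0):
    """Shifted symmetric higher-order power method for a 3rd-order symmetric
    tensor A: seeks (lam, x) with A x^2 = lam * x and ||x|| = 1.
    Illustrative baseline only, not the Newton methods of the paper."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # (A x^2)_i = sum_jk A_ijk x_j x_k
        x_new = Ax2 + alpha * x                  # positive shift keeps the iteration monotone
        x_new /= np.linalg.norm(x_new)           # project back onto the unit sphere
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # Rayleigh-quotient estimate A x^3
    return lam, x
```

On a rank-one tensor v⊗v⊗v with unit v, the iteration recovers an exact Z-eigenpair, which makes the fixed-point characterization easy to check numerically.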
{"title":"Nonconvex equilibrium models for energy markets: exploiting price information to determine the existence of an equilibrium","authors":"Julia Grübel, Olivier Huber, Lukas Hümbs, Max Klimm, Martin Schmidt, Alexandra Schwartz","doi":"10.1080/10556788.2022.2117358","DOIUrl":"https://doi.org/10.1080/10556788.2022.2117358","url":null,"abstract":"Motivated by examples from the energy sector, we consider market equilibrium problems (MEPs) involving players with nonconvex strategy spaces or objective functions, where the latter are assumed to be linear in market prices. We propose an algorithm that determines whether an equilibrium of such an MEP exists and that computes an equilibrium in case of existence. Three key prerequisites have to be met. First, appropriate bounds on market prices have to be derived from necessary optimality conditions of some players. Second, a technical assumption is required for those prices that are not uniquely determined by the derived bounds. Third, nonconvex optimization problems have to be solved to global optimality. We test the algorithm on well-known instances from the power and gas literature that meet these three prerequisites. There, nonconvexities arise from considering the transmission system operator, who, e.g. switches lines or faces nonlinear physical laws, as an additional player besides producers and consumers. Our numerical results indicate that equilibria often exist, especially in the case of continuous nonconvexities in the context of gas market problems.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133639940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An approximate Newton-type proximal method using symmetric rank-one updating formula for minimizing the nonsmooth composite functions","authors":"Z. Aminifard, S. Babaie-Kafaki","doi":"10.1080/10556788.2022.2142587","DOIUrl":"https://doi.org/10.1080/10556788.2022.2142587","url":null,"abstract":"Founded upon the scaled memoryless symmetric rank-one updating formula, we propose an approximation of the Newton-type proximal strategy for minimizing nonsmooth composite functions. More exactly, to approximate the inverse Hessian of the smooth part of the objective function, a symmetric rank-one matrix is employed to directly compute the search directions for a special category of well-known functions. Convergence of the given algorithm is established using a nonmonotone backtracking line search adjusted to the corresponding nonsmooth model. Also, its practical advantages are demonstrated computationally on two well-known real-world models.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124573520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-convex regularization and accelerated gradient algorithm for sparse portfolio selection","authors":"Qian Li, Wei Zhang, Guoqiang Wang, Yanqin Bai","doi":"10.1080/10556788.2022.2142580","DOIUrl":"https://doi.org/10.1080/10556788.2022.2142580","url":null,"abstract":"In portfolio optimization, non-convex regularization has recently been recognized as an important approach to promote sparsity, while countervailing the shortcomings of convex penalty. In this paper, we customize the non-convex piecewise quadratic approximation (PQA) function considering the background of portfolio management and present the PQA regularized mean–variance model (PMV). By analysing the structure of PMV, we prove that a KKT point of PMV is a local minimizer if the regularization parameter satisfies a mild condition. In addition, the theoretical sparsity of PMV is analysed, which is associated with the regularization parameter and the weight parameter. To solve this model, we introduce the accelerated proximal gradient (APG) algorithm and establish its improved linear convergence rate compared with the proximal gradient (PG) algorithm. Moreover, the optimal acceleration parameter of the APG algorithm for PMV is obtained. These theoretical results are further illustrated with numerical experiments. Finally, empirical analysis demonstrates that the proposed model has a better out-of-sample performance and a lower turnover than many other existing models on the tested datasets.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129029857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
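The APG scheme the abstract refers to follows the standard accelerated proximal gradient template. A minimal sketch on the l1-regularized least-squares model (the paper's PQA penalty has a different proximal map, so the soft-thresholding step below is purely illustrative, and the function name is ours):

```python
import numpy as np

def apg_l1(A, b, lam, steps=500):
    """Accelerated proximal gradient (FISTA-style) on
    min 0.5*||Ax - b||^2 + lam*||x||_1 -- the classical APG template that
    methods like the paper's adapt to their own regularizer."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    z = x.copy()                             # extrapolated point
    t = 1.0
    for _ in range(steps):
        g = A.T @ (A @ z - b)                # gradient of the smooth part at z
        w = z - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # prox of lam*||.||_1
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)               # momentum step
        x, t = x_new, t_new
    return x
```

The only piece that changes between regularizers is the prox step; swapping in the prox of the PQA penalty would turn this template into the paper's PMV solver.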
{"title":"Sparse convex optimization toolkit: a mixed-integer framework","authors":"A. Olama, E. Camponogara, Jan Kronqvist","doi":"10.1080/10556788.2023.2222429","DOIUrl":"https://doi.org/10.1080/10556788.2023.2222429","url":null,"abstract":"This paper proposes an open-source distributed solver for solving Sparse Convex Optimization (SCO) problems over computational networks. Motivated by past algorithmic advances in mixed-integer optimization, the Sparse Convex Optimization Toolkit (SCOT) adopts a mixed-integer approach to find exact solutions to SCO problems. In particular, SCOT brings together various techniques to transform the original SCO problem into an equivalent convex Mixed-Integer Nonlinear Programming (MINLP) problem that can benefit from high-performance and parallel computing platforms. To solve the equivalent mixed-integer problem, we present the Distributed Hybrid Outer Approximation (DiHOA) algorithm that builds upon the LP/NLP-based branch-and-bound and is tailored for this specific problem structure. The DiHOA algorithm combines the so-called single- and multi-tree outer approximation, naturally integrates a decentralized algorithm for distributed convex nonlinear subproblems, and utilizes enhancement techniques such as quadratic cuts. Finally, we present detailed computational experiments that show the benefit of our solver through numerical benchmarks on 140 SCO problems with distributed datasets. To show the overall efficiency of SCOT, we also provide performance profiles comparing SCOT to other state-of-the-art MINLP solvers.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115584004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear programming with nonparametric penalty programs and iterated thresholding","authors":"Jeffery Kline, Glenn M. Fung","doi":"10.1080/10556788.2022.2117356","DOIUrl":"https://doi.org/10.1080/10556788.2022.2117356","url":null,"abstract":"It is known [Mangasarian, A Newton method for linear programming, J. Optim. Theory Appl. 121 (2004), pp. 1–18] that every linear program can be solved exactly by minimizing an unconstrained quadratic penalty program. The penalty program is parameterized by a scalar t>0, and one is able to solve the original linear program in this manner when t is selected larger than a finite, but unknown, threshold. In this paper, we show that every linear program can be solved using the solution to a parameter-free penalty program. We also characterize the solutions to the quadratic penalty programs using fixed points of certain nonexpansive maps. This leads to an iterative thresholding algorithm that converges to a desired limit point. We show in numerical experiments that this iterative method can outperform a variety of standard quadratic program solvers. Finally, we show that for every t>0, the solution one obtains by solving a parameterized penalty program is guaranteed to lie in the feasible set of the original linear program.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130445141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementation of a projection and rescaling algorithm for second-order conic feasibility problems","authors":"Javier F. Pena, Negar Soheili","doi":"10.1080/10556788.2022.2119234","DOIUrl":"https://doi.org/10.1080/10556788.2022.2119234","url":null,"abstract":"This paper documents a computational implementation of a projection and rescaling algorithm for solving one of the alternative feasibility problems: find a point in L ∩ Ω or a point in L^⊥ ∩ Ω, where L is a linear subspace of R^n, L^⊥ is its orthogonal complement, and Ω is the interior of a direct product of second-order cones. The gist of the projection and rescaling algorithm is to enhance a low-cost first-order method (a basic procedure) with an adaptive reconditioning transformation (a rescaling step). We give a full description of a Python implementation of this algorithm and present multiple sets of numerical experiments on synthetic problem instances with varied levels of conditioning. Our computational experiments provide promising evidence of the effectiveness of the projection and rescaling algorithm. Our Python code is publicly available. Furthermore, the simplicity of the algorithm makes a computational implementation in other environments completely straightforward.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123302875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Minty variational principle for quasidifferentiable vector optimization problems","authors":"H. Singh, Vivek Laha","doi":"10.1080/10556788.2022.2119235","DOIUrl":"https://doi.org/10.1080/10556788.2022.2119235","url":null,"abstract":"This paper deals with quasidifferentiable vector optimization problems involving invex functions with respect to convex compact sets. We present vector variational-like inequalities of Minty type and of Stampacchia type in terms of quasidifferentials, denoted by (QMVVLI) and (QSVVLI), respectively. By utilizing these variational inequalities, we derive necessary and sufficient optimality conditions for an efficient solution of the quasidifferentiable vector optimization problem involving invex functions with respect to convex compact sets. We also establish various results for the solutions of the corresponding weak versions of the vector variational-like inequalities in terms of quasidifferentials.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122983922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HyKKT: a hybrid direct-iterative method for solving KKT linear systems","authors":"Shaked Regev, Nai-yuan Chiang, Eric F Darve, C. Petra, M. Saunders, K. Swirydowicz, Slaven Peles","doi":"10.1080/10556788.2022.2124990","DOIUrl":"https://doi.org/10.1080/10556788.2022.2124990","url":null,"abstract":"We propose a solution strategy for the large indefinite linear systems arising in interior methods for nonlinear optimization. The method is suitable for implementation on hardware accelerators such as graphics processing units (GPUs). The current gold standard for sparse indefinite systems is the LBL^T factorization, where L is a lower triangular matrix and B is 1×1 or 2×2 block diagonal. However, this requires pivoting, which substantially increases communication cost and degrades performance on GPUs. Our approach solves a large indefinite system by solving multiple smaller positive definite systems, using an iterative solver on the Schur complement and an inner direct solve (via Cholesky factorization) within each iteration. Cholesky is stable without pivoting, thereby reducing communication and allowing reuse of the symbolic factorization. We demonstrate the practicality of our approach on large optimal power flow problems and show that it can efficiently utilize GPUs and outperform LBL^T factorization of the full system.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":" 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113948632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
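The core idea of the abstract, a pivot-free direct factorization on a positive definite block combined with an iterative solve on the Schur complement, can be sketched in a few lines. This is a simplified dense illustration, not the HyKKT implementation: it assumes the (1,1) block H is already symmetric positive definite (e.g. after regularization) and that J has full row rank, and the function name is ours.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_kkt_schur_cg(H, J, r1, r2, tol=1e-10, max_iter=500):
    """Solve the saddle-point system
        [H  J^T][x]   [r1]
        [J   0 ][y] = [r2]
    by one pivot-free Cholesky factorization of H (reused at every step)
    and conjugate gradients on the Schur complement S = J H^{-1} J^T."""
    c = cho_factor(H)                       # direct inner solver, factored once
    def S_mv(v):                            # apply S without forming it
        return J @ cho_solve(c, J.T @ v)
    b = J @ cho_solve(c, r1) - r2           # Schur-complement right-hand side
    # plain conjugate gradient on S y = b (S is SPD when J has full row rank)
    y = np.zeros_like(b)
    r = b - S_mv(y)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Sp = S_mv(p)
        alpha = rs / (p @ Sp)
        y += alpha * p
        r -= alpha * Sp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    x = cho_solve(c, r1 - J.T @ y)          # back-substitute for the primal block
    return x, y
```

Because Cholesky needs no pivoting, the factorization and its symbolic analysis can be reused across all CG iterations, which is the communication saving the abstract emphasizes.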
{"title":"Stochastic distributed learning with gradient quantization and double-variance reduction","authors":"Samuel Horváth, D. Kovalev, Konstantin Mishchenko, Peter Richtárik, S. Stich","doi":"10.1080/10556788.2022.2117355","DOIUrl":"https://doi.org/10.1080/10556788.2022.2117355","url":null,"abstract":"ABSTRACT We consider distributed optimization over several devices, each sending incremental model updates to a central server. This setting is considered, for instance, in federated learning. Various schemes have been designed to compress the model updates in order to reduce the overall communication cost. However, existing methods suffer from a significant slowdown due to additional variance coming from the compression operator and as a result, only converge sublinearly. What is needed is a variance reduction technique for taming the variance introduced by compression. We propose the first methods that achieve linear convergence for arbitrary compression operators. For strongly convex functions with condition number κ, distributed among n machines with a finite-sum structure, each worker having fewer than m components, we also (i) give analysis for the weakly convex and the non-convex cases and (ii) verify in experiments that our novel variance reduced schemes are more efficient than the baselines. Moreover, we show theoretically that as the number of devices increases, higher compression levels are possible without this affecting the overall number of communications in comparison with methods that do not perform any compression. This leads to a significant reduction in communication cost. Our general analysis allows us to pick the most suitable compression for each problem, finding the right balance between additional variance and communication savings. Finally, we also (iii) give analysis for arbitrary quantized updates.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116264618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
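A concrete instance of the compression operators the abstract discusses is unbiased random dithering, as in QSGD-style quantization: each worker transmits the vector's norm, signs, and small integer levels instead of full-precision entries, and unbiasedness is what lets a variance reduction scheme tame the extra noise. A hedged sketch (a generic example of such an operator, not the specific scheme analysed in the paper; the function name and level count are illustrative):

```python
import numpy as np

def quantize_random_dithering(v, levels=4, rng=None):
    """Unbiased stochastic quantizer: E[output] = v. Each entry is rounded to
    one of `levels` magnitude levels, rounding up with probability equal to
    the fractional part so that the expectation is preserved."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * levels          # entries mapped into [0, levels]
    lower = np.floor(scaled)
    # round up with probability equal to the fractional part -> unbiasedness
    up = rng.random(v.shape) < (scaled - lower)
    q = (lower + up) / levels
    return norm * np.sign(v) * q
```

Averaging many independent quantizations of the same vector recovers it, while each single transmission is much cheaper than sending v itself.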