Optimization Methods and Software: Latest Articles

Feasible Newton methods for symmetric tensor Z-eigenvalue problems
Jiefeng Xu, Donghui Li, Xueli Bai
Optimization Methods and Software, 2022-11-14. DOI: 10.1080/10556788.2022.2142586
Abstract: Finding a Z-eigenpair of a symmetric tensor is equivalent to finding a Karush–Kuhn–Tucker point of a sphere-constrained minimization problem. Based on this equivalence, we first propose a class of iterative methods to compute a Z-eigenpair of a symmetric tensor. Each method generates a sequence of feasible points with decreasing function values, and can be regarded as an extension of descent methods for unconstrained optimization. We pay particular attention to the Newton method, showing that under appropriate conditions it is globally and quadratically convergent and that, after finitely many iterations, the unit steplength is always accepted. We also propose a nonlinear-equations-based Newton method and establish its global and quadratic convergence. Numerical experiments show that both Newton methods are very efficient.
Cited: 1
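The defining equation behind the abstract is the Z-eigenpair system T x^(m-1) = λx with ‖x‖ = 1. The sketch below is not the authors' Newton method; it is a simple shifted symmetric higher-order power iteration (in the spirit of Kolda and Mayo's SS-HOPM) that, like the paper's methods, keeps every iterate feasible on the unit sphere. The tensor, shift `alpha`, and iteration count are illustrative choices.

```python
import numpy as np
from itertools import permutations

def z_eigenpair_shifted_power(T, x0, alpha=30.0, iters=10000):
    """Shifted power iteration for an order-3 symmetric tensor T:
    repeatedly map x -> normalize(T x^2 + alpha * x).
    Every iterate stays on the unit sphere (feasible), and a fixed
    point satisfies the Z-eigenpair equation T x^2 = lam * x."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = np.einsum('ijk,j,k->i', T, x, x)  # T x^2
        x = g + alpha * x
        x /= np.linalg.norm(x)
    lam = x @ np.einsum('ijk,j,k->i', T, x, x)  # Z-eigenvalue lam = T x^3
    return lam, x

# Small symmetric order-3 tensor built by symmetrizing random entries.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))
T = np.zeros_like(A)
for p in permutations(range(3)):
    T += np.transpose(A, p)
T /= 6.0

lam, x = z_eigenpair_shifted_power(T, rng.standard_normal(3))
residual = np.linalg.norm(np.einsum('ijk,j,k->i', T, x, x) - lam * x)
```

At a fixed point, T x^2 + αx is parallel to x, so T x^2 = (c - α)x for some scalar c, i.e. (λ, x) is a Z-eigenpair; the residual above measures how closely that holds.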
Nonconvex equilibrium models for energy markets: exploiting price information to determine the existence of an equilibrium
Julia Grübel, Olivier Huber, Lukas Hümbs, Max Klimm, Martin Schmidt, Alexandra Schwartz
Optimization Methods and Software, 2022-11-11. DOI: 10.1080/10556788.2022.2117358
Abstract: Motivated by examples from the energy sector, we consider market equilibrium problems (MEPs) involving players with nonconvex strategy spaces or objective functions, where the latter are assumed to be linear in market prices. We propose an algorithm that determines whether an equilibrium of such an MEP exists and computes one in case of existence. Three key prerequisites must be met. First, appropriate bounds on market prices have to be derived from necessary optimality conditions of some players. Second, a technical assumption is required for those prices that are not uniquely determined by the derived bounds. Third, nonconvex optimization problems have to be solved to global optimality. We test the algorithm on well-known instances from the power and gas literature that meet these three prerequisites. There, nonconvexities arise from considering the transmission system operator as an additional player besides producers and consumers, one who, e.g., switches lines or faces nonlinear physical laws. Our numerical results indicate that equilibria often exist, especially in the case of continuous nonconvexities in the context of gas market problems.
Cited: 1
An approximate Newton-type proximal method using symmetric rank-one updating formula for minimizing the nonsmooth composite functions
Z. Aminifard, S. Babaie-Kafaki
Optimization Methods and Software, 2022-11-11. DOI: 10.1080/10556788.2022.2142587
Abstract: Based on the scaled memoryless symmetric rank-one updating formula, we propose an approximate Newton-type proximal method for minimizing nonsmooth composite functions. More precisely, to approximate the inverse Hessian of the smooth part of the objective function, a symmetric rank-one matrix is employed to compute the search directions directly for a special category of well-known functions. Convergence of the algorithm is established with a nonmonotone backtracking line search adapted to the corresponding nonsmooth model. Its practical advantages are demonstrated computationally on two well-known real-world models.
Cited: 1
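The building block named in the abstract is the symmetric rank-one (SR1) update of an inverse-Hessian approximation. The sketch below shows that update in isolation, with the standard denominator safeguard; the damped scaling factor is an illustrative choice (the undamped Barzilai-Borwein scaling s'y/y'y makes the SR1 denominator vanish), not the paper's exact parameterization.

```python
import numpy as np

def sr1_inverse_update(H, s, y, eps=1e-8):
    """Symmetric rank-one update of an inverse-Hessian approximation H:
        H+ = H + (s - H y)(s - H y)^T / ((s - H y)^T y),
    which enforces the secant condition H+ y = s. The update is skipped
    when the denominator is too small (standard SR1 safeguard)."""
    r = s - H @ y
    denom = r @ y
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(y):
        return H  # skip: the update would be undefined or unstable
    return H + np.outer(r, r) / denom

# Memoryless variant: restart from a scaled identity each iteration.
s = np.array([1.0, 2.0, 0.0, 1.0, -1.0])  # step:      x_{k+1} - x_k
y = np.array([1.0, 0.0, 1.0, 2.0, 1.0])   # grad diff: g_{k+1} - g_k
gamma = 0.9 * (s @ y) / (y @ y)           # damped scaling (illustrative)
H = sr1_inverse_update(gamma * np.eye(5), s, y)
```

After the update, `H` is symmetric and maps the gradient difference back onto the step, which is exactly the secant condition a quasi-Newton matrix is built to satisfy.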
Non-convex regularization and accelerated gradient algorithm for sparse portfolio selection
Qian Li, Wei Zhang, Guoqiang Wang, Yanqin Bai
Optimization Methods and Software, 2022-11-10. DOI: 10.1080/10556788.2022.2142580
Abstract: In portfolio optimization, non-convex regularization has recently been recognized as an important approach to promote sparsity while countervailing the shortcomings of convex penalties. In this paper, we customize the non-convex piecewise quadratic approximation (PQA) function to the setting of portfolio management and present the PQA-regularized mean-variance model (PMV). We prove that a KKT point of PMV is a local minimizer if the regularization parameter satisfies a mild condition, and we analyse the theoretical sparsity of PMV, which depends on the regularization parameter and the weight parameter. To solve this model, we introduce the accelerated proximal gradient (APG) algorithm and establish its improved linear convergence rate compared with the proximal gradient (PG) algorithm; the optimal acceleration parameter of APG for PMV is also obtained. These theoretical results are further illustrated with numerical experiments. Finally, empirical analysis demonstrates that the proposed model achieves better out-of-sample performance and lower turnover than many existing models on the tested datasets.
Cited: 1
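To make the APG-versus-PG distinction in the abstract concrete, here is a generic FISTA-style accelerated proximal gradient iteration on an ℓ1-regularized least-squares problem. The ℓ1 penalty is a stand-in for the paper's PQA penalty (whose proximal map is not reproduced here), and the problem data are arbitrary; the point is the momentum extrapolation step that distinguishes APG from plain PG.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def apg_l1(A, b, lam, iters=2000):
    """FISTA-style accelerated proximal gradient for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    The extrapolation with momentum weight (t_k - 1) / t_{k+1}
    accelerates the O(1/k) rate of plain PG to O(1/k^2)."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(z - (A.T @ (A @ z - b)) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
x = apg_l1(A, b, lam=0.1)
# Fixed-point (optimality) residual of the proximal-gradient map:
L = np.linalg.norm(A, 2) ** 2
res = np.linalg.norm(x - soft_threshold(x - (A.T @ (A @ x - b)) / L, 0.1 / L))
```

A point is optimal exactly when it is a fixed point of the proximal-gradient map, so a small `res` certifies (approximate) convergence.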
Sparse convex optimization toolkit: a mixed-integer framework
A. Olama, E. Camponogara, Jan Kronqvist
Optimization Methods and Software, 2022-10-30. DOI: 10.1080/10556788.2023.2222429
Abstract: This paper proposes an open-source distributed solver for Sparse Convex Optimization (SCO) problems over computational networks. Motivated by past algorithmic advances in mixed-integer optimization, the Sparse Convex Optimization Toolkit (SCOT) adopts a mixed-integer approach to find exact solutions to SCO problems. In particular, SCOT brings together various techniques to transform the original SCO problem into an equivalent convex Mixed-Integer Nonlinear Programming (MINLP) problem that can benefit from high-performance and parallel computing platforms. To solve the equivalent mixed-integer problem, we present the Distributed Hybrid Outer Approximation (DiHOA) algorithm, which builds upon LP/NLP-based branch-and-bound and is tailored for this specific problem structure. The DiHOA algorithm combines the so-called single- and multi-tree outer approximation, naturally integrates a decentralized algorithm for distributed convex nonlinear subproblems, and utilizes enhancement techniques such as quadratic cuts. Finally, we present detailed computational experiments that show the benefit of our solver through numerical benchmarks on 140 SCO problems with distributed datasets. To show the overall efficiency of SCOT we also provide performance profiles comparing SCOT to other state-of-the-art MINLP solvers.
Cited: 0
Linear programming with nonparametric penalty programs and iterated thresholding
Jeffery Kline, Glenn M. Fung
Optimization Methods and Software, 2022-10-13. DOI: 10.1080/10556788.2022.2117356
Abstract: It is known [Mangasarian, A Newton method for linear programming, J. Optim. Theory Appl. 121 (2004), pp. 1–18] that every linear program can be solved exactly by minimizing an unconstrained quadratic penalty program. The penalty program is parameterized by a scalar t>0, and one is able to solve the original linear program in this manner when t is selected larger than a finite but unknown threshold. In this paper, we show that every linear program can be solved using the solution to a parameter-free penalty program. We also characterize the solutions to the quadratic penalty programs using fixed points of certain nonexpansive maps, which leads to an iterative thresholding algorithm that converges to a desired limit point. We show in numerical experiments that this iterative method can outperform a variety of standard quadratic program solvers. Finally, we show that for every admissible parameter value, the solution one obtains by solving a parameterized penalty program is guaranteed to lie in the feasible set of the original linear program.
Cited: 0
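The sketch below illustrates the general idea of solving a linear program through a quadratic penalty, not Mangasarian's exact formulation (which is exact for a single finite t): a plain exterior quadratic penalty with continuation in the penalty weight, applied to a tiny LP whose solution is known by inspection. The LP data, step sizes, and schedule are all illustrative.

```python
import numpy as np

# Tiny LP:  min x1 + x2  s.t.  x1 + 2*x2 >= 2,  x >= 0.
# Optimal solution by inspection: x* = (0, 1), optimal value 1.
c = np.array([1.0, 1.0])
a = np.array([1.0, 2.0])

def penalty_grad(x, t):
    """Gradient of the exterior quadratic penalty
        F(x) = c'x + (t/2) * max(0, 2 - a'x)^2
                    + (t/2) * sum_i max(0, -x_i)^2."""
    s = max(0.0, 2.0 - a @ x)
    return c - t * s * a - t * np.maximum(-x, 0.0)

x = np.zeros(2)
for t in [10.0, 100.0, 1e3, 1e4]:   # continuation in the penalty weight
    step = 1.0 / (6.0 * t)          # roughly 1/L for this penalty's Hessian
    for _ in range(2000):
        x = x - step * penalty_grad(x, t)
```

For this penalty the minimizer works out to approximately (-1/(2t), 1), so it approaches the LP solution (0, 1) only as t grows; the paper's contribution is precisely avoiding such a parameter.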
Implementation of a projection and rescaling algorithm for second-order conic feasibility problems
Javier F. Pena, Negar Soheili
Optimization Methods and Software, 2022-10-06. DOI: 10.1080/10556788.2022.2119234
Abstract: This paper documents a computational implementation of a projection and rescaling algorithm for solving one of the alternative feasibility problems: find a point in L ∩ Ω or a point in L⊥ ∩ Ω, where L is a linear subspace, L⊥ is its orthogonal complement, and Ω is the interior of a direct product of second-order cones. The gist of the projection and rescaling algorithm is to enhance a low-cost first-order method (a basic procedure) with an adaptive reconditioning transformation (a rescaling step). We give a full description of a Python implementation of this algorithm and present multiple sets of numerical experiments on synthetic problem instances with varied levels of conditioning. Our computational experiments provide promising evidence of the effectiveness of the projection and rescaling algorithm. Our Python code is publicly available. Furthermore, the simplicity of the algorithm makes a computational implementation in other environments completely straightforward.
Cited: 0
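A core primitive any such implementation needs is the Euclidean projection onto a second-order cone, which has a simple closed form. The sketch below shows only that primitive, not the paper's basic procedure or rescaling step.

```python
import numpy as np

def project_soc(v):
    """Euclidean projection of v = (t, x) onto the second-order
    (Lorentz) cone K = {(t, x) : ||x||_2 <= t}."""
    t, x = v[0], v[1:]
    nx = np.linalg.norm(x)
    if nx <= t:             # already in K
        return v.copy()
    if nx <= -t:            # in the polar cone: project onto the origin
        return np.zeros_like(v)
    beta = 0.5 * (t + nx)   # otherwise project onto the boundary of K
    return np.concatenate(([beta], beta * x / nx))

v = np.array([0.5, 3.0, 4.0])   # ||x|| = 5 > t = 0.5, outside K
p = project_soc(v)
```

The three branches cover membership in the cone, membership in its polar (where the projection is the origin), and the generic boundary case.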
On Minty variational principle for quasidifferentiable vector optimization problems
H. Singh, Vivek Laha
Optimization Methods and Software, 2022-09-30. DOI: 10.1080/10556788.2022.2119235
Abstract: This paper deals with quasidifferentiable vector optimization problems involving functions that are invex with respect to convex compact sets. We present vector variational-like inequalities of Minty type and of Stampacchia type in terms of quasidifferentials, denoted by (QMVVLI) and (QSVVLI), respectively. Utilizing these variational inequalities, we derive necessary and sufficient optimality conditions for an efficient solution of the quasidifferentiable vector optimization problem involving invex functions with respect to convex compact sets. We also establish various results for the solutions of the corresponding weak versions of the vector variational-like inequalities in terms of quasidifferentials.
Cited: 1
HyKKT: a hybrid direct-iterative method for solving KKT linear systems
Shaked Regev, Nai-yuan Chiang, Eric F Darve, C. Petra, M. Saunders, K. Swirydowicz, Slaven Pelevs
Optimization Methods and Software, 2022-09-27. DOI: 10.1080/10556788.2022.2124990
Abstract: We propose a solution strategy for the large indefinite linear systems arising in interior methods for nonlinear optimization. The method is suitable for implementation on hardware accelerators such as graphics processing units (GPUs). The current gold standard for sparse indefinite systems is the LBLᵀ factorization, where L is lower triangular and B is diagonal or block diagonal. However, this factorization requires pivoting, which substantially increases communication cost and degrades performance on GPUs. Our approach solves a large indefinite system by solving multiple smaller positive definite systems, using an iterative solver on the Schur complement and an inner direct solve (via Cholesky factorization) within each iteration. Cholesky is stable without pivoting, thereby reducing communication and allowing reuse of the symbolic factorization. We demonstrate the practicality of our approach on large optimal power flow problems and show that it can efficiently utilize GPUs and outperform LBLᵀ factorization of the full system.
Cited: 5
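The Schur-complement idea in the abstract can be sketched in a few lines. This is a dense toy version with a direct solve in place of the paper's iterative (CG) solver and none of its regularization: with H symmetric positive definite, both H and the Schur complement S = J H⁻¹ Jᵀ are SPD, so Cholesky applies to each without pivoting.

```python
import numpy as np

def solve_kkt_schur(H, J, r1, r2):
    """Solve the saddle-point system
        [[H, J^T], [J, 0]] [x; y] = [r1; r2]
    with H SPD, via the Schur complement S = J H^{-1} J^T.
    Both H and S are SPD, so Cholesky needs no pivoting, which is
    the property a direct-iterative scheme like HyKKT exploits."""
    L = np.linalg.cholesky(H)
    def h_solve(B):  # apply H^{-1} via forward/back substitution
        return np.linalg.solve(L.T, np.linalg.solve(L, B))
    S = J @ h_solve(J.T)                       # Schur complement (SPD)
    y = np.linalg.solve(S, J @ h_solve(r1) - r2)
    x = h_solve(r1 - J.T @ y)
    return x, y

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
H = M @ M.T + 6 * np.eye(6)                    # SPD Hessian block
J = rng.standard_normal((2, 6))                # full-row-rank constraints
r1, r2 = rng.standard_normal(6), rng.standard_normal(2)
x, y = solve_kkt_schur(H, J, r1, r2)
```

The result can be checked against a direct solve of the full indefinite system; the two agree to machine precision on this toy instance.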
Stochastic distributed learning with gradient quantization and double-variance reduction
Samuel Horváth, D. Kovalev, Konstantin Mishchenko, Peter Richtárik, S. Stich
Optimization Methods and Software, 2022-09-27. DOI: 10.1080/10556788.2022.2117355
Abstract: We consider distributed optimization over several devices, each sending incremental model updates to a central server, a setting that arises, for instance, in federated learning. Various schemes have been designed to compress the model updates in order to reduce the overall communication cost. However, existing methods suffer from a significant slowdown due to the additional variance introduced by the compression operator and, as a result, only converge sublinearly. What is needed is a variance reduction technique for taming the variance introduced by compression. We propose the first methods that achieve linear convergence for arbitrary compression operators, for strongly convex functions with condition number κ distributed among n machines with a finite-sum structure, each worker having fewer than m components. We also (i) give analysis for the weakly convex and non-convex cases, (ii) verify in experiments that our novel variance-reduced schemes are more efficient than the baselines, and (iii) give analysis for arbitrary quantized updates. Moreover, we show theoretically that as the number of devices increases, higher compression levels are possible without affecting the overall number of communications in comparison with methods that do not perform any compression, leading to a significant reduction in communication cost. Our general analysis allows one to pick the most suitable compression for each problem, finding the right balance between additional variance and communication savings.
Cited: 12
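A minimal example of an unbiased compression operator of the kind the abstract refers to is stochastic rounding (random dithering) to a coarse grid: it satisfies E[Q(v)] = v exactly, and the per-coordinate variance it adds (at most delta^2/4) is the quantity the paper's variance-reduction schemes tame. This is an illustrative operator, not the paper's specific quantizer.

```python
import numpy as np

def dither_quantize(v, delta, rng):
    """Unbiased stochastic rounding of each coordinate to a grid of
    spacing delta: round down, then round up with probability equal to
    the fractional remainder, so that E[Q(v)] = v exactly while only
    grid values (cheaper to transmit) are ever sent."""
    low = np.floor(v / delta)
    frac = v / delta - low
    return delta * (low + (rng.random(v.shape) < frac))

rng = np.random.default_rng(4)
v = np.array([0.3, -1.7, 2.45])
# Empirical check of unbiasedness: average many independent quantizations.
avg = np.mean([dither_quantize(v, 0.25, rng) for _ in range(20000)], axis=0)
```

Averaging many independent quantizations recovers `v` to within sampling error, which is exactly the unbiasedness that the convergence analyses of such methods rely on.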