Optimization Methods and Software: Latest Articles

Preface to the special issue
Optimization Methods and Software Pub Date : 2021-11-02 DOI: 10.1080/10556788.2021.2061218
O. Khamisov, A. Eremeev, Vladimir Ushakov
Abstract: This special issue of Optimization Methods and Software is devoted to the International Conference 'Mathematical Optimization Theory and Operations Research' (MOTOR 2019), held near Ekaterinburg, Russia, on 8–12 July 2019. The MOTOR series descends from four well-known international and all-Russian conferences held in the Urals, Siberia, and the Far East for several decades. Talks presented at these conferences are frequently published in special issues of well-known journals. To reach the standard number of pages per issue, some regular papers were added to this journal issue after the following three papers were submitted to the special issue.
The paper 'Cooperative Differential Games with Continuous Updating Using Hamilton–Jacobi–Bellman Equation' by O. Petrosian, A. Tur, Z. Wang, and H. Gao investigates cooperative differential games whose structure is defined on a closed time interval of fixed duration. The problem of cooperative players' behaviour is considered. The paper contains a thorough introduction and theoretical results, as well as an illustrative example based on a model of non-renewable resource extraction.
The authors of 'Dynamic Programming for Travelling Salesman with Precedence Constraints: Parallel Morin–Marsten Bounding' implement a parallel Morin–Marsten branch-and-bound scheme for dynamic programming and test it on conventional and bottleneck formulations of the travelling salesman problem with precedence constraints, using benchmark instances from TSPLIB. In the experiments, the parallel scheme was shown to scale well with the number of processors and yielded significant memory savings compared to non-bounded dynamic programming, especially on large-scale instances.
The third paper proposes methods for solving optimization problems, saddle point problems, and variational inequalities; both smooth and nonsmooth cases are considered. It provides sufficient theoretical justification and quite comprehensive numerical testing results.
Citations: 0
An efficient hybrid conjugate gradient method with sufficient descent property for unconstrained optimization
Optimization Methods and Software Pub Date : 2021-10-12 DOI: 10.1080/10556788.2021.1977808
Mina Lotfi, Seyed Mohammad Hosseini
Abstract: In order to take advantage of the strong theoretical properties of the FR method and the computational efficiency of the second method, we present a new hybrid conjugate gradient method based on a convex combination of these methods. In our method, the search directions satisfy the sufficient descent condition independently of any line search. Under some standard assumptions, we establish the global convergence property of the proposed method for general functions. Numerical comparisons on test problems from the CUTEst library illustrate the efficiency and robustness of the proposed method in practice.
Citations: 1
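The hybrid scheme described above can be sketched in a few lines. The listing does not name the second formula combined with FR, so the sketch below assumes Polak–Ribière–Polyak (PRP) purely for illustration; the Armijo backtracking line search and the fixed mixing parameter `theta` are also assumptions, not details from the paper.

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-8, max_iter=500):
    """Illustrative hybrid nonlinear CG: beta is a convex combination of the
    Fletcher-Reeves (FR) and Polak-Ribiere-Polyak (PRP) formulas.
    PRP stands in for the unnamed second method; theta controls the mix."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart with steepest descent
            d = -g
        # Armijo backtracking line search
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_prp = (g_new @ (g_new - g)) / (g @ g)
        beta = theta * beta_fr + (1 - theta) * max(beta_prp, 0.0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# toy strongly convex quadratic: f(x) = 0.5 x'Ax - b'x, minimizer solves Ax = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = hybrid_cg(f, grad, np.zeros(2))
```

On this quadratic the iterates recover the solution of Ax = b; the safeguard restart keeps every direction a descent direction regardless of the line search.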
Off-line exploration of rectangular cellular environments with a rectangular obstacle
Optimization Methods and Software Pub Date : 2021-10-11 DOI: 10.1080/10556788.2021.1977811
Fatemeh Keshavarz-Kohjerdi
Abstract: In this paper, we consider exploring a known rectangular cellular environment containing a rectangular obstacle using a mobile robot. The robot has to visit each cell and return to its starting cell; the goal is to find the shortest tour that visits all the cells. We give a linear-time algorithm that finds an exploration tour of optimal length. While previous algorithms for environments with obstacles are approximation algorithms, the algorithm presented in this paper is optimal. The algorithm also works for L-shaped and C-shaped environments. Its main idea is first to find the longest simple exploring cycle and then extend it to include the unvisited cells.
Citations: 3
Linear systems arising in interior methods for convex optimization: a symmetric formulation with bounded condition number
Optimization Methods and Software Pub Date : 2021-10-08 DOI: 10.1080/10556788.2021.1965599
Alexandre Ghannad, D. Orban, Michael A. Saunders
Abstract: We provide eigenvalue bounds for a new formulation of the step equations in interior methods for convex quadratic optimization. The matrix of our formulation has a bounded condition number, converges to a well-defined limit under strict complementarity, and has the same size as the traditional, ill-conditioned saddle-point formulation. We evaluate its performance in the context of a Matlab object-oriented implementation of PDCO, an interior-point solver for minimizing a smooth convex function subject to linear constraints. The main benefit of our implementation, named PDCOO, is to separate the logic of the interior-point method from the formulation of the system used to compute a step at each iteration and from the method used to solve that system. PDCOO thus allows easy addition of new system formulations and solution methods for experimentation. Our numerical experiments indicate that the new formulation has the same storage requirements as the traditional ill-conditioned saddle-point formulation, and that its conditioning is often more favourable than that of the unsymmetric block formulation.
Citations: 2
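A small numpy experiment illustrates the ill-conditioning this paper targets: as the barrier parameter mu shrinks, the diagonal term X⁻¹Z in the traditional saddle-point step-equation matrix acquires entries of order 1/mu and mu, and the condition number blows up. The problem data below are random and purely illustrative; the paper's bounded-condition-number formulation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)             # SPD Hessian of a toy convex QP
A = rng.standard_normal((m, n))     # equality-constraint matrix

conds = []
for mu in (1e-2, 1e-4, 1e-6):
    # strictly complementary iterate: for active bounds x_i -> 0 with z_i -> 1,
    # for inactive bounds x_i -> 1 with z_i -> 0, keeping x_i z_i = mu
    x = np.where(np.arange(n) < n // 2, mu, 1.0)
    z = mu / x
    D = np.diag(z / x)              # X^{-1} Z: entries of order 1/mu and mu
    K2 = np.block([[Q + D, A.T], [A, np.zeros((m, m))]])
    conds.append(np.linalg.cond(K2))
```

The condition numbers grow roughly like 1/mu, which is exactly the behaviour a bounded-condition-number formulation avoids.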
Exact penalties for decomposable convex optimization problems
Optimization Methods and Software Pub Date : 2021-09-28 DOI: 10.1080/10556788.2021.1977807
I. Konnov
Abstract: We consider a general decomposable convex optimization problem. By using a right-hand-side allocation technique, it can be transformed into a collection of small-dimensional optimization problems. The master problem is a convex non-smooth optimization problem. We propose to apply the exact non-smooth penalty method, which gives a solution of the initial problem under some fixed penalty parameter and ensures the consistency of the lower-level problems. The master problem can be solved with a suitable non-smooth optimization method. The simplest of these is the subgradient projection method with the divergent-series step-size rule and no line search, whose convergence may, however, be rather slow. We suggest enhancing its step-size selection with a two-speed rule. Preliminary computational experiments confirm the efficiency of this technique.
Citations: 0
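The 'simplest' master-problem solver mentioned in the abstract, the subgradient projection method with a divergent-series step-size rule, can be sketched directly. The toy problem (an l1 objective over a box) and the step constant `a` are assumptions chosen for illustration; the paper's two-speed rule is not implemented here.

```python
import numpy as np

def projected_subgradient(f, subgrad, proj, x0, a=1.0, iters=2000):
    """Projected subgradient method with the divergent-series step-size
    rule alpha_k = a/(k+1) (no line search); returns the best point seen."""
    x = np.asarray(x0, dtype=float)
    best, f_best = x.copy(), f(x)
    for k in range(iters):
        x = proj(x - (a / (k + 1)) * subgrad(x))
        if f(x) < f_best:
            best, f_best = x.copy(), f(x)
    return best

# toy non-smooth master-type problem: minimize ||x - c||_1 over the box [0,1]^2
c = np.array([2.0, -1.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)      # a subgradient of the l1 objective
proj = lambda x: np.clip(x, 0.0, 1.0)   # projection onto the box
best = projected_subgradient(f, subgrad, proj, np.array([0.5, 0.5]))
```

Tracking the best iterate matters because the divergent-series rule does not decrease the objective monotonically; here the method lands on the box corner [1, 0] that minimizes the l1 distance to c.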
Outer approximation algorithms for convex vector optimization problems
Optimization Methods and Software Pub Date : 2021-09-15 DOI: 10.1080/10556788.2023.2167994
Irem Nur Keskin, Firdevs Ulus
Abstract: In this study, we present a general framework of outer approximation algorithms to solve convex vector optimization problems, in which the Pascoletti–Serafini (PS) scalarization is solved iteratively. This scalarization finds the minimum 'distance' from a reference point, usually taken as a vertex of the current outer approximation, to the upper image along a given direction. We propose efficient methods to select the parameters (the reference point and direction vector) of the PS scalarization and analyse their effect on the overall performance of the algorithm. Unlike the existing vertex selection rules from the literature, the proposed methods do not require solving additional single-objective optimization problems. Using a set of test problems, we conduct an extensive computational study in which three different measures are set as stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We observe that the proposed variants give satisfactory results, especially in terms of runtime, compared to the existing variants from the literature.
Citations: 5
Study on precoding optimization algorithms in massive MIMO system with multi-antenna users
Optimization Methods and Software Pub Date : 2021-07-28 DOI: 10.1080/10556788.2022.2091564
E. Bobrov, D. Kropotov, S. Troshin, D. Zaev
Abstract: The paper studies the multi-user precoding problem as a non-convex optimization problem for wireless multiple-input multiple-output (MIMO) systems. In our work, we approximate the target spectral efficiency function with a novel, computationally simpler function. We then reduce the precoding problem to an unconstrained optimization task using a special differential projection method and solve it with the quasi-Newton L-BFGS iterative procedure to achieve gains in capacity. We test the proposed approach in several scenarios generated using Quadriga, open-source software for generating realistic radio channel impulse responses. Our method shows monotonic improvement over heuristic methods with reasonable computation time. The proposed L-BFGS optimization scheme is novel in this area and shows a significant advantage over standard approaches. The proposed method has a simple implementation and can serve as a good reference for other heuristic algorithms in this field.
Citations: 4
Full-low evaluation methods for derivative-free optimization
Optimization Methods and Software Pub Date : 2021-07-25 DOI: 10.1080/10556788.2022.2142582
A. Berahas, Oumaima Sohab, L. N. Vicente
Abstract: We propose a new class of rigorous methods for derivative-free optimization with the aim of delivering efficient and robust numerical performance for functions of all types, from smooth to non-smooth, and under different noise regimes. To this end, we have developed a class of methods, called Full-Low Evaluation methods, organized around two main types of iterations. The first iteration type (called Full-Eval) is expensive in function evaluations but exhibits good performance in the smooth and non-noisy cases. For the theory, we consider a line search based on an approximate gradient, backtracking until a sufficient decrease condition is satisfied. In practice, the gradient is approximated via finite differences, and the direction is calculated by a quasi-Newton step (BFGS). The second iteration type (called Low-Eval) is cheap in function evaluations yet more robust in the presence of noise or non-smoothness. For the theory, we consider direct search, and in practice we use probabilistic direct search with one random direction and its negative. A condition for switching from Full-Eval to Low-Eval iterations is developed based on the values of the line-search and direct-search stepsizes. If enough Full-Eval steps are taken, we derive a complexity result of gradient-descent type. When Full-Eval fails, the Low-Eval iterations become the drivers of convergence, yielding convergence in the non-smooth case. Full-Low Evaluation methods are shown to be efficient and robust in practice across problems with different levels of smoothness and noise.
Citations: 3
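A compact sketch of the two iteration types and the switch between them, under simplifying assumptions: plain steepest descent replaces the paper's BFGS direction in Full-Eval, and the switch tests only whether the line-search stepsize has collapsed. All thresholds are illustrative, not the paper's.

```python
import numpy as np

def full_low_eval(f, x0, iters=200, seed=0):
    """Sketch of a Full-Low Evaluation loop: expensive finite-difference
    line-search iterations (Full-Eval), switching to cheap direct-search
    polling (Low-Eval) when the line-search stepsize collapses."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n, h, delta, mode = len(x), 1e-7, 1.0, "full"
    for _ in range(iters):
        fx = f(x)
        if mode == "full":
            # forward-difference gradient; steepest descent stands in
            # for the paper's BFGS direction
            g = np.array([(f(x + h * e) - fx) / h for e in np.eye(n)])
            d, alpha = -g, 1.0
            while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d) and alpha > 1e-10:
                alpha *= 0.5
            if f(x + alpha * d) < fx:
                x = x + alpha * d
            if alpha < 1e-8:          # simplified switch condition
                mode = "low"
        else:
            v = rng.standard_normal(n)
            v /= np.linalg.norm(v)
            for d in (v, -v):         # poll one random direction and its negative
                if f(x + delta * d) < fx - 1e-8 * delta ** 2:
                    x = x + delta * d
                    delta *= 2.0
                    break
            else:                     # unsuccessful poll: shrink and retry Full-Eval
                delta *= 0.5
                mode = "full"
    return x

# smooth, noise-free test function, so Full-Eval iterations drive convergence
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
x_opt = full_low_eval(f, np.zeros(2))
```

On a smooth noise-free function the loop stays in (or returns to) Full-Eval mode, matching the abstract's claim that Low-Eval takes over only when the expensive iterations stop making progress.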
Quadratic rates of asymptotic regularity for the Tikhonov–Mann iteration
Optimization Methods and Software Pub Date : 2021-07-15 DOI: 10.1080/10556788.2022.2060974
Horaţiu Cheval, Laurentiu Leustean
Abstract: In this paper, we compute quadratic rates of asymptotic regularity for the Tikhonov–Mann iteration in W-hyperbolic spaces. This iteration extends to a nonlinear setting the modified Mann iteration defined recently by Boţ, Csetnek, and Meier in Hilbert spaces. Furthermore, we show that the Douglas–Rachford and forward-backward algorithms with Tikhonov regularization terms are special cases, in Hilbert spaces, of our Tikhonov–Mann iteration.
Citations: 8
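In Hilbert spaces the Tikhonov–Mann iteration takes (roughly) the form x_{n+1} = (1 − λ_n)β_n x_n + λ_n T(β_n x_n), where T is nonexpansive and β_n → 1 is the Tikhonov regularization factor. A minimal numpy sketch with an assumed operator (a plane rotation, whose fixed point is the origin) and the illustrative choices λ_n ≡ 0.5, β_n = n/(n+1):

```python
import numpy as np

def tikhonov_mann(T, x0, iters=500):
    """Tikhonov-Mann iteration: x_{n+1} = (1 - lam)*beta_n*x_n
    + lam*T(beta_n*x_n), with beta_n -> 1 supplying the Tikhonov term."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, iters + 1):
        lam = 0.5                   # illustrative Mann averaging parameter
        beta = n / (n + 1.0)        # illustrative Tikhonov factor, -> 1
        y = beta * x
        x = (1 - lam) * y + lam * T(y)
    return x

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda y: R @ y                 # nonexpansive rotation, fixed point 0
x_inf = tikhonov_mann(T, np.array([1.0, 2.0]))
```

The Tikhonov factors pull the iterates toward the origin while the Mann averaging damps the rotation, so the sequence converges strongly to the fixed point of T.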
A new multipoint symmetric secant method with a dense initial matrix
Optimization Methods and Software Pub Date : 2021-07-13 DOI: 10.1080/10556788.2023.2167993
Jennifer B. Erway, Mostafa Rezapour
Abstract: In large-scale optimization, when either forming or storing Hessian matrices is prohibitively expensive, quasi-Newton methods are often used in lieu of Newton's method because they require only first-order information to approximate the true Hessian. Multipoint symmetric secant (MSS) methods can be thought of as generalizations of quasi-Newton methods in that they impose additional requirements on their approximation of the Hessian. Given an initial Hessian approximation, MSS methods generate a sequence of possibly indefinite matrices using rank-2 updates to solve nonconvex unconstrained optimization problems. For practical reasons, up to now, the initialization has been a constant multiple of the identity matrix. In this paper, we propose a new limited-memory MSS method for large-scale nonconvex optimization that allows for dense initializations. Numerical results on the CUTEst test problems suggest that the MSS method with a dense initialization outperforms the standard initialization, and that this approach is competitive with both a basic L-SR1 trust-region method and an L-PSB method.
Citations: 7