{"title":"Improved randomized approaches to the location of a conservative hyperplane","authors":"Xiaosong Ding, Jun Ma, Xiuming Li, Xi Chen","doi":"10.1007/s11590-024-02136-7","DOIUrl":"https://doi.org/10.1007/s11590-024-02136-7","url":null,"abstract":"<p>This paper presents improved approaches to the treatment of combinatorial challenges associated with the search process for conservative cuts arising in disjoint bilinear programming. We introduce a new randomized approach that leverages the active constraint information within a hyperplane containing the given local solution. It can restrict the search process to only one dimension and mitigate the impact of growing degeneracy imposed on computational loads. The utilization of recursion further refines our strategy by systematically reducing the number of adjacent vertices available for exchange. Extensive computational experiments validate that these approaches can significantly enhance computational efficiency to the scale of <span>(10^{-3})</span> s, particularly for those problems with high dimensions and degrees of degeneracy.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"175 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The modified second APG method for a class of nonconvex nonsmooth problems","authors":"Kexin Ren, Chunguang Liu, Lumiao Wang","doi":"10.1007/s11590-024-02132-x","DOIUrl":"https://doi.org/10.1007/s11590-024-02132-x","url":null,"abstract":"<p>In this paper, we consider <i> the modified second accelerated proximal gradient algorithm</i> (APG<span>(_{s})</span>) introduced in Lin and Liu (Optim Lett 13(4), 805–824, 2019), discuss the behaviour of this method on more general cases, prove the convergence properties under weaker assumptions. Finally, numerical experiments are performed to support our theoretical results.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"28 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization","authors":"Zexian Liu, Yu-Hong Dai, Hongwei Liu","doi":"10.1007/s11590-024-02131-y","DOIUrl":"https://doi.org/10.1007/s11590-024-02131-y","url":null,"abstract":"<p>Subspace minimization conjugate gradient (SMCG) methods are a class of quite efficient iterative methods for unconstrained optimization. The orthogonality is an important property of linear conjugate gradient method. It is however observed that the orthogonality of the gradients in linear conjugate gradient method is often lost, which usually causes slow convergence. Based on SMCG<span>(_)</span>BB (Liu and Liu in J Optim Theory Appl 180(3):879–906, 2019), we combine subspace minimization conjugate gradient method with the limited memory technique and present a limited memory subspace minimization conjugate gradient algorithm for unconstrained optimization. The proposed method includes two types of iterations: SMCG iteration and quasi-Newton (QN) iteration. In the SMCG iteration, the search direction is determined by solving a quadratic approximation problem, in which the important parameter is estimated based on some properties of the objective function at the current iterative point. In the QN iteration, a modified quasi-Newton method in the subspace is proposed to improve the orthogonality. Additionally, a modified strategy for choosing the initial stepsize is exploited. The global convergence of the proposed method is established under weak conditions. Some numerical results indicate that, for the tested functions in the CUTEr library, the proposed method has a great improvement over SMCG<span>(_)</span>BB, and it is comparable to the latest limited memory conjugate gradient software package CG<span>(_)</span>DESCENT (6.8) (Hager and Zhang in SIAM J Optim 23(4):2150–2168, 2013) and is also superior to the famous limited memory BFGS (L-BFGS) method.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"21 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An accelerated lyapunov function for Polyak’s Heavy-ball on convex quadratics","authors":"Antonio Orvieto","doi":"10.1007/s11590-024-02119-8","DOIUrl":"https://doi.org/10.1007/s11590-024-02119-8","url":null,"abstract":"<p>In 1964, Polyak showed that the Heavy-ball method, the simplest momentum technique, accelerates convergence of strongly-convex problems in the vicinity of the solution. While Nesterov later developed a globally accelerated version, Polyak’s original algorithm remains simpler and more widely used in applications such as deep learning. Despite this popularity, the question of whether Heavy-ball is also globally accelerated or not has not been fully answered yet, and no convincing counterexample has been provided. This is largely due to the difficulty in finding an effective Lyapunov function: indeed, most proofs of Heavy-ball acceleration in the strongly-convex quadratic setting rely on eigenvalue arguments. Our work adopts a different approach: studying momentum through the lens of quadratic invariants of simple harmonic oscillators. By utilizing the modified Hamiltonian of Störmer-Verlet integrators, we are able to construct a Lyapunov function that demonstrates an <span>(O(1/k^2))</span> rate for Heavy-ball in the case of convex quadratic problems. Our novel proof technique, though restricted to linear regression, is found to work well empirically also on non-quadratic convex problems, and thus provides insights on the structure of Lyapunov functions to be used in the general convex case. As such, our paper makes a promising first step towards potentially proving the acceleration of Polyak’s momentum method and we hope it inspires further research around this question.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"24 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmark-based deviation and drawdown measures in portfolio optimization","authors":"Michael Zabarankin, Bogdan Grechuk, Dawei Hao","doi":"10.1007/s11590-024-02124-x","DOIUrl":"https://doi.org/10.1007/s11590-024-02124-x","url":null,"abstract":"<p>Understanding and modeling of agent’s risk/reward preferences is a central problem in various applications of risk management including investment science and portfolio theory in particular. One of the approaches is to axiomatically define a set of performance measures and to use a benchmark to identify a particular measure from that set by either inverse optimization or functional dominance. For example, such a benchmark could be the rate of return of an existing attractive financial instrument. This work introduces deviation and drawdown measures that incorporate rates of return of indicated financial instruments (benchmarks). For discrete distributions and discrete sample paths, portfolio problems with such measures are reduced to linear programs and solved based on historical data in cases of a single benchmark and three benchmarks used simultaneously. The optimal portfolios and corresponding benchmarks have similar expected/cumulative rates of return in sample and out of sample, but the former are considerably less volatile.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"74 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Strategy investments in zero-sum games","authors":"Raul Garcia, Seyedmohammadhossein Hosseinian, Mallesh Pai, Andrew J. Schaefer","doi":"10.1007/s11590-024-02130-z","DOIUrl":"https://doi.org/10.1007/s11590-024-02130-z","url":null,"abstract":"<p>We propose an extension of two-player zero-sum games, where one player may select available actions for themselves and the opponent, subject to a budget constraint. We present a mixed-integer linear programming (MILP) formulation for the problem, provide analytical results regarding its solution, and discuss applications in the security and advertising domains. Our computational experiments demonstrate that heuristic approaches, on average, yield suboptimal solutions with at least a 20% relative gap with those obtained by the MILP formulation.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the linear convergence rate of Riemannian proximal gradient method","authors":"Woocheol Choi, Changbum Chun, Yoon Mo Jung, Sangwoon Yun","doi":"10.1007/s11590-024-02129-6","DOIUrl":"https://doi.org/10.1007/s11590-024-02129-6","url":null,"abstract":"<p>Composite optimization problems on Riemannian manifolds arise in applications such as sparse principal component analysis and dictionary learning. Recently, Huang and Wei introduced a Riemannian proximal gradient method (Huang and Wei in MP 194:371–413, 2022) and an inexact Riemannian proximal gradient method (Wen and Ke in COA 85:1–32, 2023), utilizing the retraction mapping to address these challenges. They established the sublinear convergence rate of the Riemannian proximal gradient method under the retraction convexity and a geometric condition on retractions, as well as the local linear convergence rate of the inexact Riemannian proximal gradient method under the Riemannian Kurdyka-Lojasiewicz property. In this paper, we demonstrate the linear convergence rate of the Riemannian proximal gradient method and the linear convergence rate of the proximal gradient method proposed in Chen et al. (SIAM J Opt 30:210–239, 2020) under strong retraction convexity. Additionally, we provide a counterexample that violates the geometric condition on retractions, which is crucial for establishing the sublinear convergence rate of the Riemannian proximal gradient method.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"5 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A modification of the forward–backward splitting method for monotone inclusions","authors":"Van Dung Nguyen","doi":"10.1007/s11590-024-02128-7","DOIUrl":"https://doi.org/10.1007/s11590-024-02128-7","url":null,"abstract":"<p>In this work, we propose a new splitting method for monotone inclusion problems with three operators in real Hilbert spaces, in which one is maximal monotone, one is monotone-Lipschitz and one is cocoercive. By specializing in two operator inclusion, we recover the forward–backward and the generalization of the reflected-forward–backward splitting methods as particular cases. The weak convergence of the algorithm under standard assumptions is established. The linear convergence rate of the proposed method is obtained under an additional condition like the strong monotonicity. We also give some theoretical comparisons to demonstrate the efficiency of the proposed method.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"2015 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solution polishing via path relinking for continuous black-box optimization","authors":"Dimitri J. Papageorgiou, Jan Kronqvist, Asha Ramanujam, James Kor, Youngdae Kim, Can Li","doi":"10.1007/s11590-024-02127-8","DOIUrl":"https://doi.org/10.1007/s11590-024-02127-8","url":null,"abstract":"<p>When faced with a limited budget of function evaluations, state-of-the-art black-box optimization (BBO) solvers struggle to obtain globally, or sometimes even locally, optimal solutions. In such cases, one may pursue solution polishing, i.e., a computational method to improve (or “polish”) an incumbent solution, typically via some sort of evolutionary algorithm involving two or more solutions. While solution polishing in “white-box” optimization has existed for years, relatively little has been published regarding its application in costly-to-evaluate BBO. To fill this void, we explore two novel methods for performing solution polishing along one-dimensional curves rather than along straight lines. We introduce a convex quadratic program that can generate promising curves through multiple elite solutions, i.e., via path relinking, or around a single elite solution. In comparing four solution polishing techniques for continuous BBO, we show that solution polishing along a curve is competitive with solution polishing using a state-of-the-art BBO solver.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"40 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual and generalized dual cones in Banach spaces","authors":"Akhtar A. Khan, Dezhou Kong, Jinlu Li","doi":"10.1007/s11590-024-02126-9","DOIUrl":"https://doi.org/10.1007/s11590-024-02126-9","url":null,"abstract":"<p>This paper proposes and analyzes the notion of dual cones associated with the metric projection and generalized projection in Banach spaces. We show that the dual cones, related to the metric projection and generalized metric projection, lose many important properties in transitioning from Hilbert spaces to Banach spaces. We also propose and analyze the notions of faces and visions in Banach spaces and relate them to metric projection and generalized projection. We provide many illustrative examples to give insight into the given results</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"188 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141513847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}