{"title":"A characterization of maximal homogeneous-quadratic-free sets","authors":"Gonzalo Muñoz, Joseph Paat, Felipe Serrano","doi":"10.1007/s10107-024-02092-1","DOIUrl":"https://doi.org/10.1007/s10107-024-02092-1","url":null,"abstract":"<p>The intersection cut framework was introduced by Balas in 1971 as a method for generating cutting planes in integer optimization. In this framework, one uses a full-dimensional convex <i>S</i>-free set, where <i>S</i> is the feasible region of the integer program, to derive a cut separating <i>S</i> from a non-integral vertex of a linear relaxation of <i>S</i>. Among all <i>S</i>-free sets, it is the inclusion-wise maximal ones that yield the strongest cuts. Recently, this framework has been extended beyond the integer case in order to obtain cutting planes in non-linear settings. In this work, we consider the specific setting when <i>S</i> is defined by a homogeneous quadratic inequality. In this ‘quadratic-free’ setting, every function <span>\\(\\Gamma : D^m \\rightarrow D^n\\)</span>, where <span>\\(D^k\\)</span> is the unit sphere in <span>\\(\\mathbb {R}^k\\)</span>, generates a representation of a quadratic-free set. While not every <span>\\(\\Gamma \\)</span> generates a maximal quadratic-free set, it is the case that every full-dimensional maximal quadratic-free set is generated by some <span>\\(\\Gamma \\)</span>. Our main result shows that the corresponding quadratic-free set is full-dimensional and maximal if and only if <span>\\(\\Gamma \\)</span> is non-expansive and satisfies a technical condition. This result yields a broader class of maximal <i>S</i>-free sets than previously known. Our result stems from a new characterization of maximal <i>S</i>-free sets (for general <i>S</i> beyond the quadratic setting) based on sequences that ‘expose’ inequalities defining the <i>S</i>-free set.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"25 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141152037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal general factor problem and jump system intersection","authors":"Yusuke Kobayashi","doi":"10.1007/s10107-024-02098-9","DOIUrl":"https://doi.org/10.1007/s10107-024-02098-9","url":null,"abstract":"<p>In the optimal general factor problem, given a graph <span>\\(G=(V, E)\\)</span> and a set <span>\\(B(v) \\subseteq {\\mathbb {Z}}\\)</span> of integers for each <span>\\(v \\in V\\)</span>, we seek an edge subset <i>F</i> of maximum cardinality subject to <span>\\(d_F(v) \\in B(v)\\)</span> for <span>\\(v \\in V\\)</span>, where <span>\\(d_F(v)\\)</span> denotes the number of edges in <i>F</i> incident to <i>v</i>. A recent crucial work by Dudycz and Paluch shows that this problem can be solved in polynomial time if each <i>B</i>(<i>v</i>) has no gap of length more than one. While their algorithm is very simple, its correctness proof is quite complicated. In this paper, we formulate the optimal general factor problem as jump system intersection, and reveal when the algorithm by Dudycz and Paluch can be applied to this abstract form of the problem. By using this abstraction, we give another correctness proof of the algorithm, which is simpler than the original one. We also extend our result to the valuated case.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"45 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141152157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A linear time algorithm for linearizing quadratic and higher-order shortest path problems","authors":"Eranda Çela, Bettina Klinz, Stefan Lendl, Gerhard J. Woeginger, Lasse Wulf","doi":"10.1007/s10107-024-02086-z","DOIUrl":"https://doi.org/10.1007/s10107-024-02086-z","url":null,"abstract":"<p>An instance of the NP-hard Quadratic Shortest Path Problem (QSPP) is called linearizable iff it is equivalent to an instance of the classic Shortest Path Problem (SPP) on the same input digraph. The linearization problem for the QSPP (LinQSPP) decides whether a given QSPP instance is linearizable and determines the corresponding SPP instance in the positive case. We provide a novel linear time algorithm for the LinQSPP on acyclic digraphs which runs considerably faster than the previously best algorithm. The algorithm is based on a new insight revealing that the linearizability of the QSPP for acyclic digraphs can be seen as a local property. Our approach extends to the more general higher-order shortest path problem.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"1 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140941067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaining or losing perspective for convex multivariate functions on box domains","authors":"Luze Xu, Jon Lee","doi":"10.1007/s10107-024-02087-y","DOIUrl":"https://doi.org/10.1007/s10107-024-02087-y","url":null,"abstract":"<p>Mixed-integer nonlinear optimization formulations of the disjunction between the origin and a polytope via a binary indicator variable are broadly used in nonlinear combinatorial optimization to model a fixed cost associated with carrying out a group of activities and a convex cost function associated with the levels of the activities. The perspective relaxation of such models is often used to solve to global optimality in a branch-and-bound context, but it typically requires suitable conic solvers and is not compatible with general-purpose NLP software in the presence of other classes of constraints. This motivates the investigation of when simpler but weaker relaxations may be adequate. Comparing the volume (i.e., Lebesgue measure) of the relaxations as a measure of tightness, we lift some of the results related to the simplex case to the box case. In order to compare the volumes of different relaxations in the box case, it is necessary to find an appropriate concave upper bound that preserves convexity and is minimal, which is more difficult than in the simplex case. To address this challenge beyond the simplex case, we use a triangulation approach.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"28 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140941064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sample complexity analysis for adaptive optimization algorithms with stochastic oracles","authors":"Billy Jin, Katya Scheinberg, Miaolan Xie","doi":"10.1007/s10107-024-02078-z","DOIUrl":"https://doi.org/10.1007/s10107-024-02078-z","url":null,"abstract":"<p>Several classical adaptive optimization algorithms, such as line search and trust-region methods, have recently been extended to stochastic settings where function values, gradients, and, in some cases, Hessians are estimated via stochastic oracles. Unlike the majority of stochastic methods, these methods do not use a pre-specified sequence of step size parameters, but adapt the step size parameter according to the estimated progress of the algorithm and use it to dictate the accuracy required from the stochastic oracles. The requirements on the stochastic oracles are thus also adaptive, and the oracle costs can vary from iteration to iteration. The step size parameters in these methods can increase and decrease based on the perceived progress, but unlike the deterministic case they are not bounded away from zero due to possible oracle failures, and bounds on the step size parameter have not been previously derived. This creates obstacles in the total complexity analysis of such methods, because the oracle costs are typically decreasing in the step size parameter and could be arbitrarily large as the step size parameter goes to 0. Thus, until now only the total iteration complexity of these methods has been analyzed. In this paper, we derive a lower bound on the step size parameter that holds with high probability for a large class of adaptive stochastic methods. We then use this lower bound to derive a framework for analyzing the expected and high probability total oracle complexity of any method in this class. Finally, we apply this framework to analyze the total sample complexity of two particular algorithms, STORM (Blanchet et al. in INFORMS J Optim 1(2):92–119, 2019) and SASS (Jin et al. in High probability complexity bounds for adaptive step search based on stochastic oracles, 2021. https://doi.org/10.48550/ARXIV.2106.06454), in the expected risk minimization problem.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"161 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From approximate to exact integer programming","authors":"Daniel Dadush, Friedrich Eisenbrand, Thomas Rothvoss","doi":"10.1007/s10107-024-02084-1","DOIUrl":"https://doi.org/10.1007/s10107-024-02084-1","url":null,"abstract":"<p>Approximate integer programming is the following: For a given convex body <span>\\(K \\subseteq {\\mathbb {R}}^n\\)</span>, either determine whether <span>\\(K \\cap {\\mathbb {Z}}^n\\)</span> is empty, or find an integer point in the convex body <span>\\(2\\cdot (K - c) + c\\)</span>, which is <i>K</i> scaled by 2 from its center of gravity <i>c</i>. Approximate integer programming can be solved in time <span>\\(2^{O(n)}\\)</span>, while the fastest known methods for exact integer programming run in time <span>\\(2^{O(n)} \\cdot n^n\\)</span>. So far, no efficient methods for exact integer programming are known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point <span>\\(x^* \\in (K \\cap {\\mathbb {Z}}^n)\\)</span> can be found in time <span>\\(2^{O(n)}\\)</span>, provided that the <i>remainders</i> of each component <span>\\(x_i^* \\bmod \\ell \\)</span> of <span>\\(x^*\\)</span> are given, for some arbitrarily fixed <span>\\(\\ell \\ge 5(n+1)\\)</span>. The algorithm is based on a <i>cutting-plane technique</i>, iteratively halving the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a <span>\\(2^{O(n)}n^n\\)</span> algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. PhD thesis, Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new <i>asymmetric approximate Carathéodory theorem</i> that might be of interest on its own. Our second method concerns integer programming problems in equation-standard form <span>\\(Ax = b, 0 \\le x \\le u, \\, x \\in {\\mathbb {Z}}^n\\)</span>. Such a problem can be reduced to the solution of <span>\\(\\prod _i O(\\log u_i + 1)\\)</span> approximate integer programming problems. This implies, for example, that <i>knapsack</i> or <i>subset-sum</i> problems with <i>polynomial variable range</i> <span>\\(0 \\le x_i \\le p(n)\\)</span> can be solved in time <span>\\((\\log n)^{O(n)}\\)</span>. For these problems, the best running time so far was <span>\\(n^n \\cdot 2^{O(n)}\\)</span>.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"27 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140885825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An update-and-stabilize framework for the minimum-norm-point problem","authors":"Satoru Fujishige, Tomonari Kitahara, László A. Végh","doi":"10.1007/s10107-024-02077-0","DOIUrl":"https://doi.org/10.1007/s10107-024-02077-0","url":null,"abstract":"<p>We consider the minimum-norm-point (MNP) problem over polyhedra, a well-studied problem that encompasses linear programming. We present a general algorithmic framework that combines two fundamental approaches for this problem: active set methods and first order methods. Our algorithm performs first order update steps, followed by iterations that aim to ‘stabilize’ the current iterate with additional projections, i.e., find a locally optimal solution whilst keeping the current tight inequalities. Such steps have been previously used in active set methods for the nonnegative least squares (NNLS) problem. We bound the number of iterations polynomially in the dimension and in the associated circuit imbalance measure. In particular, the algorithm is strongly polynomial for network flow instances. Classical NNLS algorithms such as the Lawson–Hanson algorithm are special instantiations of our framework; as a consequence, we obtain convergence bounds for these algorithms. Our preliminary computational experiments show promising practical performance.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"27 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140885829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extended convergence analysis of the Scholtes-type regularization for cardinality-constrained optimization problems","authors":"Sebastian Lämmel, Vladimir Shikhman","doi":"10.1007/s10107-024-02082-3","DOIUrl":"https://doi.org/10.1007/s10107-024-02082-3","url":null,"abstract":"<p>We extend the convergence analysis of the Scholtes-type regularization method for cardinality-constrained optimization problems (CCOP). Its behavior is clarified in the vicinity of saddle points, and not just of minimizers, as has been done in the literature before. This becomes possible by using as an intermediate step the recently introduced regularized continuous reformulation of a cardinality-constrained optimization problem. We show that the Scholtes-type regularization method is well-defined locally around a nondegenerate T-stationary point of this regularized continuous reformulation. Moreover, the nondegenerate Karush–Kuhn–Tucker points of the corresponding Scholtes-type regularization converge to a T-stationary point having the same index, i.e. its topological type persists. As a consequence, we conclude that the global structure of the Scholtes-type regularization essentially coincides with that of CCOP.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"48 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140885824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compressing branch-and-bound trees","authors":"Gonzalo Muñoz, Joseph Paat, Álinson S. Xavier","doi":"10.1007/s10107-024-02080-5","DOIUrl":"https://doi.org/10.1007/s10107-024-02080-5","url":null,"abstract":"<p>A branch-and-bound (BB) tree certifies a dual bound on the value of an integer program. In this work, we introduce the tree compression problem (TCP): <i>Given a BB tree</i> <i>T</i> <i>that certifies a dual bound, can we obtain a smaller tree with the same (or stronger) bound by either (1) applying a different disjunction at some node in</i> <i>T</i> <i>or (2) removing leaves from</i> <i>T</i>? We believe such post-hoc analysis of BB trees may assist in identifying helpful general disjunctions in BB algorithms. We initiate our study by considering the computational complexity and limitations of TCP. We then conduct experiments to evaluate the compressibility of realistic branch-and-bound trees generated by commonly used branching strategies, using both an exact and a heuristic compression algorithm.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"32 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finding global minima via kernel approximations","authors":"Alessandro Rudi, Ulysse Marteau-Ferey, Francis Bach","doi":"10.1007/s10107-024-02081-4","DOIUrl":"https://doi.org/10.1007/s10107-024-02081-4","url":null,"abstract":"<p>We consider the global minimization of smooth functions based solely on function evaluations. Algorithms that achieve the optimal number of function evaluations for a given precision level typically rely on explicitly constructing an approximation of the function, which is then minimized with algorithms that have exponential running-time complexity. In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum. This is done by using infinite sums of square smooth functions and has strong links with polynomial sum-of-squares hierarchies. Leveraging recent representation properties of reproducing kernel Hilbert spaces, the infinite-dimensional optimization problem can be solved by subsampling in time polynomial in the number of function evaluations, and with theoretical guarantees on the obtained minimum. Given <i>n</i> samples, the computational cost is <span>\\(O(n^{3.5})\\)</span> in time, <span>\\(O(n^2)\\)</span> in space, and we achieve a convergence rate to the global optimum that is <span>\\(O(n^{-m/d + 1/2 + 3/d})\\)</span>, where <i>m</i> is the degree of differentiability of the function and <i>d</i> the number of dimensions. The rate is nearly optimal in the case of Sobolev functions and more generally makes the proposed method particularly suitable for functions with many derivatives. Indeed, when <i>m</i> is on the order of <i>d</i>, the convergence rate to the global optimum does not suffer from the curse of dimensionality, which affects only the worst-case constants (that we track explicitly through the paper).</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":"2 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}