{"title":"Affine optimal k-proper connected edge colorings","authors":"Robert D. Barish, Tetsuo Shibuya","doi":"10.1007/s11590-024-02111-2","DOIUrl":"https://doi.org/10.1007/s11590-024-02111-2","url":null,"abstract":"<p>We introduce <i>affine optimal</i> <i>k</i>-<i>proper connected edge colorings</i> as a variation on Fujita’s notion of <i>optimal</i> <i>k</i>-<i>proper connected colorings</i> (Fujita in Optim Lett 14(6):1371–1380, 2020. https://doi.org/10.1007/s11590-019-01442-9) with applications to the frequency assignment problem. Here, for a simple undirected graph <i>G</i> with edge set <span>(E_G)</span>, such a coloring corresponds to a decomposition of <span>(E_G)</span> into color classes <span>(C_1, C_2, ldots , C_n)</span>, with associated weights <span>(w_1, w_2, ldots , w_n)</span>, minimizing a specified affine function <span>({mathcal {A}}, {:=},sum _{i=1}^{n} left( w_i cdot |C_i|right))</span>, while also ensuring the existence of <i>k</i> vertex disjoint <i>proper paths</i> (i.e., simple paths with no two adjacent edges in the same color class) between all pairs of vertices. In this context, we define <span>(zeta _{{mathcal {A}}}^k(G))</span> as the minimum possible value of <span>({mathcal {A}})</span> under a <i>k</i>-proper connectivity requirement. For any fixed number of color classes, we show that computing <span>(zeta _{{mathcal {A}}}^k(G))</span> is treewidth fixed parameter tractable. However, we also show that determining <span>(zeta _{{mathcal {A}}^{prime }}^k(G))</span> with the affine function <span>({mathcal {A}}^{prime } , {:=},0 cdot |C_1| + |C_2|)</span> is <i>NP</i>-hard for 2-connected planar graphs in the case where <span>(k = 1)</span>, cubic 3-connected planar graphs for <span>(k = 2)</span>, and <i>k</i>-connected graphs <span>(forall k ge 3)</span>. We also show that no fully polynomial-time randomized approximation scheme can exist for approximating <span>(zeta _{{mathcal {A}}^{prime }}^k(G))</span> under any of the aforementioned constraints unless <span>(NP=RP)</span>.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"55 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An improvement of the Goldstein line search","authors":"Arnold Neumaier, Morteza Kimiaei","doi":"10.1007/s11590-024-02110-3","DOIUrl":"https://doi.org/10.1007/s11590-024-02110-3","url":null,"abstract":"<p>This paper introduces <span>CLS</span>, a new line search along an arbitrary smooth search path, that starts at the current iterate tangentially to a descent direction. Like the Goldstein line search and unlike the Wolfe line search, the new line search uses, beyond the gradient at the current iterate, only function values. Using this line search with search directions satisfying the bounded angle condition, global convergence to a stationary point is proved for continuously differentiable objective functions that are bounded below and have Lipschitz continuous gradients. The standard complexity bounds are proved under several natural assumptions.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"177 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An approximation algorithm for k-level squared metric facility location problem with outliers","authors":"Li Zhang, Jing Yuan, Qiaoliang Li","doi":"10.1007/s11590-024-02107-y","DOIUrl":"https://doi.org/10.1007/s11590-024-02107-y","url":null,"abstract":"<p>We investigate <i>k</i>-level squared metric facility location problem with outliers (<i>k</i>-SMFLPWO) for any constant <i>k</i>. In <i>k</i>-SMFLPWO, given <i>k</i> facilities set <span>({mathcal {F}}_{l})</span>, where <span>(lin {1, 2, cdots , k})</span>, clients set <span>({mathcal {C}})</span> with cardinality <i>n</i> and a non-negative integer <span>(q<n)</span>. The sum of opening and connection cost will be substantially increased by distant clients. To minimize the total cost, some distant clients can not be connected, in short, at least <span>(n-q)</span> clients in clients set <span>({mathcal {C}})</span> are connected to the path <span>(p=(i_{1}in {mathcal {F}}_{1}, i_{2}in {mathcal {F}}_{2}, cdots , i_{k}in {mathcal {F}}_{k}))</span> where the facilities in path <i>p</i> are opened. Based on primal-dual approximation algorithm and the property of squared metric triangle inequality, we present a constant factor approximation algorithm for <i>k</i>-SMFLPWO.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"39 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On penalized reload cost path, walk, tour and maximum flow: hardness and approximation","authors":"Donatella Granata","doi":"10.1007/s11590-024-02108-x","DOIUrl":"https://doi.org/10.1007/s11590-024-02108-x","url":null,"abstract":"<p>A meticulous description of a real network with respect to its heterogeneous physical infrastructure and properties is necessary for network design assessment. Quantifying the costs of making these structures work together effectively, and taking into account any hidden charges they may incur, can lead to improve the quality of service and reduce mandatory maintenance requirements, and mitigate the cost associated with finding a valid solution. For these reasons, we devote our attention to a novel approach to produce a more complete representation of the overall costs on the reload cost network. This approach considers both the cost of reloading due to linking structures and their internal charges, which we refer to as the <i>penalized reload cost</i>. We investigate the complexity and approximability of finding an optimal path, walk, tour, and maximum flow problems under <i>penalized reload cost</i>. All these problems turn out to be NP-complete. We prove that, unless P=NP, even if the reload cost matrix is symmetric and satisfies the triangle inequality, the problem of finding a path, tour, and a maximum flow with a minimum <i>penalized reload cost</i> cannot be approximated within any constant <span>(alpha <2)</span>, and finding a walk is not approximable within any factor <span>(beta le 3)</span>.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"1 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralized bilevel optimization","authors":"Xuxing Chen, Minhui Huang, Shiqian Ma","doi":"10.1007/s11590-024-02101-4","DOIUrl":"https://doi.org/10.1007/s11590-024-02101-4","url":null,"abstract":"<p>Bilevel optimization has been successfully applied to many important machine learning problems. Algorithms for solving bilevel optimization have been studied under various settings. In this paper, we study the nonconvex-strongly-convex bilevel optimization under a decentralized setting. We design decentralized algorithms for both deterministic and stochastic bilevel optimization problems. Moreover, we analyze the convergence rates of the proposed algorithms in difference scenarios including the case where data heterogeneity is observed across agents. Numerical experiments on both synthetic and real data demonstrate that the proposed methods are efficient.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"41 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the implementation of ADMM with dynamically configurable parameter for the separable $$ell _{1}/ell _{2}$$ minimization","authors":"Jun Wang, Qiang Ma","doi":"10.1007/s11590-024-02106-z","DOIUrl":"https://doi.org/10.1007/s11590-024-02106-z","url":null,"abstract":"<p>In this paper, we propose a novel variant of the alternating direction method of multipliers (ADMM) approach for solving minimization of the rate of <span>(ell _{1})</span> and <span>(ell _{2})</span> norms for sparse recovery. We first transform the quotient of <span>(ell _{1})</span> and <span>(ell _{2})</span> norms into a new function of the separable variables using the least squares minimum norm solution of the linear system of equations. Subsequently, we employ the augmented Lagrangian function to formulate the corresponding ADMM method with a dynamically adjustable parameter. Additionally, each of its subproblems possesses a unique global minimum. Finally, we present some numerical experiments to demonstrate our results.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"40 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The projected splitting iterative methods based on tensor splitting and its majorization matrix splitting for the tensor complementarity problem","authors":"Mengxiao Fan, Jicheng Li","doi":"10.1007/s11590-024-02104-1","DOIUrl":"https://doi.org/10.1007/s11590-024-02104-1","url":null,"abstract":"<p>In this paper, we develop two kinds of the projected iterative methods for the tensor complementarity problem combining two different splitting frameworks. The first method is on the basis of tensor splitting, and its monotone convergence is proved based on the <span>({mathcal{L}})</span>-tensor and the strongly monotone tensor. Meanwhile, an alternative method is in the light of majorization matrix splitting, the convergence of which is given and is particularly analyzed based on the power Lipschitz tensor. Some numerical examples are tested to illustrate the proposed methods.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"77 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140300401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subdifferentials of convex matrix-valued functions","authors":"","doi":"10.1007/s11590-024-02105-0","DOIUrl":"https://doi.org/10.1007/s11590-024-02105-0","url":null,"abstract":"<h3>Abstract</h3> <p>Subdifferentials (in the sense of convex analysis) of matrix-valued functions defined on <span> <span>(mathbb {R}^d)</span> </span> that are convex with respect to the Löwner partial order can have a complicated structure and might be very difficult to compute even in simple cases. The aim of this paper is to study subdifferential calculus for such functions and properties of their subdifferentials. We show that many standard results from convex analysis no longer hold true in the matrix-valued case. For example, in this case the subdifferential of the sum is not equal to the sum of subdifferentials, the Clarke subdifferential is not equal to the subdifferential in the sense of convex analysis, etc. Nonetheless, it is possible to provide simple rules for computing nonempty subsets of subdifferentials (in particular, individual subgradients) of convex matrix-valued functions in the general case and to completely describe subdifferentials of such functions defined on the real line. As a by-product of our analysis, we derive some interesting properties of convex matrix-valued functions, e.g. we show that if such function is nonsmooth, then its diagonal elements must be nonsmooth as well.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"9 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140199501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to project in a criterion space search algorithm: an application to multi-objective binary linear programming","authors":"Alvaro Sierra-Altamiranda, Hadi Charkhgard, Iman Dayarian, Ali Eshragh, Sorna Javadi","doi":"10.1007/s11590-024-02100-5","DOIUrl":"https://doi.org/10.1007/s11590-024-02100-5","url":null,"abstract":"<p>In this paper, we investigate the possibility of improving the performance of multi-objective optimization solution approaches using machine learning techniques. Specifically, we focus on multi-objective binary linear programs and employ one of the most effective and recently developed criterion space search algorithms, the so-called KSA, during our study. This algorithm computes all nondominated points of a problem with <i>p</i> objectives by searching on a projected criterion space, i.e., a <span>((p-1))</span>-dimensional criterion apace. We present an effective and fast learning approach to identify on which projected space the KSA should work. We also present several generic features/variables that can be used in machine learning techniques for identifying the best projected space. Finally, we present an effective bi-objective optimization-based heuristic for selecting the subset of the features to overcome the issue of overfitting in learning. Through an extensive computational study over 2000 instances of tri-objective knapsack and assignment problems, we demonstrate that an improvement of up to 18% in time can be achieved by the proposed learning method compared to a random selection of the projected space. To show that the performance of our algorithm is not limited to instances of knapsack and assignment problems with three objective functions, we also report similar performance results when the proposed learning approach is used for solving random binary integer program instances with four objective functions.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"16 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140171236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convergence analysis of the DFP algorithm for unconstrained optimization problems on Riemannian manifolds","authors":"Xiao-bo Li, Kai Tu, Jian Lu","doi":"10.1007/s11590-024-02103-2","DOIUrl":"https://doi.org/10.1007/s11590-024-02103-2","url":null,"abstract":"<p>In this paper, we propose the DFP algorithm with inexact line search for unconstrained optimization problems on Riemannian manifolds. Under some reasonable conditions, the global convergence result is established and the superlinear local convergence rate of the DFP algorithm is proved on Riemannian manifolds. The preliminary computational experiment is also reported to illustrate the effectiveness of the DFP algorithm.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"12 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140114921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}