Almost optimal manipulation of pairwise comparisons of alternatives
Jacek Szybowski, Konrad Kułakowski, Sebastian Ernst
Journal of Global Optimization. Published 2024-04-12. DOI: 10.1007/s10898-024-01391-3

Abstract: The role of an expert in the decision-making process is crucial. When we ask an expert to help us make a decision, we assume their honesty. But what if the expert is dishonest? Then the question of how difficult it is for the expert to provide manipulated data in a given decision-making case becomes essential. In this work, we consider manipulation of a ranking obtained by the Geometric Mean Method applied to a pairwise comparisons matrix. More specifically, we propose an algorithm for finding an almost optimal way to swap the positions of two selected alternatives in a ranking. We also define a new index that measures how difficult such a manipulation is in a given case.

{"title":"Optimality conditions and sensitivity analysis in parametric nonconvex minimax programming","authors":"D. T. V. An, N. H. Hung, D. T. Ngoan, N. V. Tuyen","doi":"10.1007/s10898-024-01388-y","DOIUrl":"https://doi.org/10.1007/s10898-024-01388-y","url":null,"abstract":"<p>In this paper, we perform optimality conditions and sensitivity analysis for parametric nonconvex minimax programming problems. Our aim is to study the necessary optimality conditions by using the Mordukhovich (limiting) subdifferential and to give upper estimations for the Mordukhovich subdifferential of the optimal value function in the problem under consideration. The optimality conditions and sensitivity analysis are obtained by using upper estimates for Mordukhovich subdifferentials of the maximum function. The results on optimality conditions are then applied to parametric multiobjective optimization problems. An example is given to illustrate our results.\u0000</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"26 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140562224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An approximation proximal gradient algorithm for nonconvex-linear minimax problems with nonconvex nonsmooth terms","authors":"Jiefei He, Huiling Zhang, Zi Xu","doi":"10.1007/s10898-024-01383-3","DOIUrl":"https://doi.org/10.1007/s10898-024-01383-3","url":null,"abstract":"<p>Nonconvex minimax problems have attracted significant attention in machine learning, wireless communication and many other fields. In this paper, we propose an efficient approximation proximal gradient algorithm for solving a class of nonsmooth nonconvex-linear minimax problems with a nonconvex nonsmooth term, and the number of iteration to find an <span>(varepsilon )</span>-stationary point is upper bounded by <span>({mathcal {O}}(varepsilon ^{-3}))</span>. Some numerical results on one-bit precoding problem in massive MIMO system and a distributed non-convex optimization problem demonstrate the effectiveness of the proposed algorithm.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"1 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140300455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A partial Bregman ADMM with a general relaxation factor for structured nonconvex and nonsmooth optimization","authors":"Jianghua Yin, Chunming Tang, Jinbao Jian, Qiongxuan Huang","doi":"10.1007/s10898-024-01384-2","DOIUrl":"https://doi.org/10.1007/s10898-024-01384-2","url":null,"abstract":"<p>In this paper, a partial Bregman alternating direction method of multipliers (ADMM) with a general relaxation factor <span>(alpha in (0,frac{1+sqrt{5}}{2}))</span> is proposed for structured nonconvex and nonsmooth optimization, where the objective function is the sum of a nonsmooth convex function and a smooth nonconvex function without coupled variables. We add a Bregman distance to alleviate the difficulty of solving the nonsmooth subproblem. For the smooth subproblem, we directly perform a gradient descent step of the augmented Lagrangian function, which makes the computational cost of each iteration of our method very cheap. To our knowledge, the nonconvex ADMM with a relaxation factor <span>(alpha ne 1)</span> in the literature has never been studied for the problem under consideration. Under some mild conditions, the boundedness of the generated sequence, the global convergence and the iteration complexity are established. The numerical results verify the efficiency and robustness of the proposed method.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"11 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140199916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modification and improved implementation of the RPD method for computing state relaxations for global dynamic optimization","authors":"","doi":"10.1007/s10898-024-01381-5","DOIUrl":"https://doi.org/10.1007/s10898-024-01381-5","url":null,"abstract":"<h3>Abstract</h3> <p>This paper presents an improved method for computing convex and concave relaxations of the parametric solutions of ordinary differential equations (ODEs). These are called state relaxations and are crucial for solving dynamic optimization problems to global optimality via branch-and-bound (B &B). The new method improves upon an existing approach known as relaxation preserving dynamics (RPD). RPD is generally considered to be among the best available methods for computing state relaxations in terms of both efficiency and accuracy. However, it requires the solution of a hybrid dynamical system, whereas other similar methods only require the solution of a simple system of ODEs. This is problematic in the context of branch-and-bound because it leads to higher cost and reduced reliability (i.e., invalid relaxations can result if hybrid mode switches are not detected numerically). Moreover, there is no known sensitivity theory for the RPD hybrid system. This makes it impossible to compute subgradients of the RPD relaxations, which are essential for efficiently solving the associated B &B lower bounding problems. To address these limitations, this paper presents a small but important modification of the RPD theory, and a corresponding modification of its numerical implementation, that crucially allows state relaxations to be computed by solving a system of ODEs rather than a hybrid system. This new RPD method is then compared to the original using two examples and shown to be more efficient, more robust, and of almost identical accuracy.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"102 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140199824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the use of overlapping convex hull relaxations to solve nonconvex MINLPs
Ouyang Wu, Pavlo Muts, Ivo Nowak, Eligius M. T. Hendrix
Journal of Global Optimization. Published 2024-03-22. DOI: 10.1007/s10898-024-01376-2

Abstract: We present a novel relaxation for general nonconvex sparse MINLP problems, called the overlapping convex hull relaxation (CHR). It is defined by replacing all nonlinear constraint sets by their convex hulls. If the convex hulls are disjunctive, e.g. if the MINLP is block-separable, the CHR is equivalent to the convex hull relaxation obtained by (standard) column generation (CG). The CHR can be used for computing an initial lower bound in the root node of a branch-and-bound algorithm, or for computing a start vector for a local-search-based MINLP heuristic. We describe a dynamic block and column generation (DBCG) MINLP algorithm that generates the CHR by dynamically adding aggregated blocks. The idea of adding aggregated blocks in the CHR is similar to the well-known cutting plane approach. Numerical experiments on nonconvex MINLP instances show that the duality gap can be significantly reduced using the resulting CHRs. DBCG is implemented as part of the CG-MINLP framework Decogo; see https://decogo.readthedocs.io/en/latest/index.html.

Faster algorithms for sparse ILP and hypergraph multi-packing/multi-cover problems
Dmitry Gribanov, Ivan Shumilov, Dmitry Malyshev, Nikolai Zolotykh
Journal of Global Optimization. Published 2024-03-20. DOI: 10.1007/s10898-024-01379-z

Abstract: In our paper, we consider the following general problems: check feasibility, count the number of feasible solutions, find an optimal solution, and count the number of optimal solutions in \(\mathcal{P} \cap \mathbb{Z}^n\), assuming that \(\mathcal{P}\) is a polyhedron defined by a system \(Ax \le b\) or \(Ax = b,\ x \ge 0\) with a sparse matrix \(A\). We develop algorithms for these problems that outperform state-of-the-art ILP and counting algorithms on sparse instances with bounded elements in terms of computational complexity. Assuming that the matrix \(A\) has bounded elements, our complexity bounds have the form \(s^{O(n)}\), where \(s\) is the minimum between the numbers of non-zeroes in the columns and the rows of \(A\), respectively. For \(s = o(\log n)\), this bound outperforms the state-of-the-art ILP feasibility complexity bound \((\log n)^{O(n)}\) due to Reis and Rothvoss (2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, pp. 974–988). For \(s = \phi^{o(\log n)}\), where \(\phi\) denotes the input bit-encoding length, it outperforms the state-of-the-art ILP counting complexity bound \(\phi^{O(n \log n)}\) due to Barvinok et al. (Proceedings of the 1993 IEEE 34th Annual Foundations of Computer Science, pp. 566–572, https://doi.org/10.1109/SFCS.1993.366830, 1993), Dyer and Kannan (Math Oper Res 22(3):545–549, https://doi.org/10.1287/moor.22.3.545, 1997), Barvinok and Pommersheim (Algebr Combin 38:91–147, 1999), and Barvinok (European Mathematical Society, ETH-Zentrum, Zurich, 2008). We use known and new methods to develop new exponential algorithms for Edge/Vertex Multi-Packing/Multi-Cover Problems on graphs and hypergraphs. This framework comprises many different problems, such as the Stable Multi-set, Vertex Multi-cover, Dominating Multi-set, Set Multi-cover, Multi-set Multi-cover, and Hypergraph Multi-matching problems, which are natural generalizations of the standard Stable Set, Vertex Cover, Dominating Set, Set Cover, and Maximum Matching problems.

Nonlinear scalarization in set optimization based on the concept of null set
Anveksha Moar, Pradeep Kumar Sharma, C. S. Lalitha
Journal of Global Optimization. Published 2024-03-20. DOI: 10.1007/s10898-024-01385-1

Abstract: The aim of this paper is to introduce a nonlinear scalarization function in set optimization based on the concept of null set, which was introduced by Wu (J Math Anal Appl 472(2):1741–1761, 2019). We introduce a notion of the pseudo algebraic interior of a set and define a weak set order relation using the concept of null set. We investigate several properties of this nonlinear scalarization function. Further, we characterize the set order relations and investigate optimality conditions for solution sets in set optimization based on the concept of null set. Finally, a numerical example is provided to compute a weak minimal solution using this nonlinear scalarization function.

{"title":"An inertial ADMM for a class of nonconvex composite optimization with nonlinear coupling constraints","authors":"Le Thi Khanh Hien, Dimitri Papadimitriou","doi":"10.1007/s10898-024-01382-4","DOIUrl":"https://doi.org/10.1007/s10898-024-01382-4","url":null,"abstract":"<p>In this paper, we propose an inertial alternating direction method of multipliers for solving a class of non-convex multi-block optimization problems with <i>nonlinear coupling constraints</i>. Distinctive features of our proposed method, when compared with other alternating direction methods of multipliers for solving non-convex problems with nonlinear coupling constraints, include: (i) we apply the inertial technique to the update of primal variables and (ii) we apply a non-standard update rule for the multiplier by scaling the multiplier by a factor before moving along the ascent direction where a relaxation parameter is allowed. Subsequential convergence and global convergence are presented for the proposed algorithm.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"34 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140168165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convergence and worst-case complexity of adaptive Riemannian trust-region methods for optimization on manifolds","authors":"Zhou Sheng, Gonglin Yuan","doi":"10.1007/s10898-024-01378-0","DOIUrl":"https://doi.org/10.1007/s10898-024-01378-0","url":null,"abstract":"<p>Trust-region methods have received massive attention in a variety of continuous optimization. They aim to obtain a trial step by minimizing a quadratic model in a region of a certain trust-region radius around the current iterate. This paper proposes an adaptive Riemannian trust-region algorithm for optimization on manifolds, in which the trust-region radius depends linearly on the norm of the Riemannian gradient at each iteration. Under mild assumptions, we establish the liminf-type convergence, lim-type convergence, and global convergence results of the proposed algorithm. In addition, the proposed algorithm is shown to reach the conclusion that the norm of the Riemannian gradient is smaller than <span>(epsilon )</span> within <span>({mathcal {O}}(frac{1}{epsilon ^2}))</span> iterations. Some numerical examples of tensor approximations are carried out to reveal the performances of the proposed algorithm compared to the classical Riemannian trust-region algorithm.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"27 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140154986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}