{"title":"Efficient gradient-based optimization for reconstructing binary images in applications to electrical impedance tomography","authors":"","doi":"10.1007/s10589-024-00553-z","DOIUrl":"https://doi.org/10.1007/s10589-024-00553-z","url":null,"abstract":"<h3>Abstract</h3> <p>A novel and highly efficient computational framework for reconstructing binary-type images suitable for models of various complexity seen in diverse biomedical applications is developed and validated. Efficiency in computational speed and accuracy is achieved by combining the advantages of recently developed optimization methods that use sample solutions with customized geometry and multiscale control space reduction, all paired with gradient-based techniques. The control space is effectively reduced based on the geometry of the samples and their individual contributions. The entire 3-step computational procedure has an easy-to-follow design due to a nominal number of tuning parameters making the approach simple for practical implementation in various settings. Fairly straightforward methods for computing gradients make the framework compatible with any optimization software, including black-box ones. The performance of the complete computational framework is tested in applications to 2D inverse problems of cancer detection by electrical impedance tomography (EIT) using data from models generated synthetically and obtained from medical images showing the natural development of cancerous regions of various sizes and shapes. The results demonstrate the superior performance of the new method and its high potential for improving the overall quality of the EIT-based procedures.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A benchmark generator for scenario-based discrete optimization","authors":"Matheus Bernardelli de Moraes, Guilherme Palermo Coelho","doi":"10.1007/s10589-024-00551-1","DOIUrl":"https://doi.org/10.1007/s10589-024-00551-1","url":null,"abstract":"<p>Multi-objective evolutionary algorithms (MOEAs) are a practical tool to solve non-linear problems with multiple objective functions. However, when applied to expensive black-box scenario-based optimization problems, MOEA’s performance becomes constrained due to computational or time limitations. Scenario-based optimization refers to problems that are subject to uncertainty, where each solution is evaluated over an ensemble of scenarios to reduce risks. A primary reason for MOEA’s failure is that algorithm development is challenging in these cases as many of these problems are black-box, high-dimensional, discrete, and computationally expensive. For this reason, this paper proposes a benchmark generator to create fast-to-compute scenario-based discrete test problems with different degrees of complexity. Our framework uses the structure of the Multi-Objective Knapsack Problem to create test problems that simulate characteristics of expensive scenario-based discrete problems. To validate our proposition, we tested four state-of-the-art MOEAs in 30 test instances generated with our framework, and the empirical results demonstrate that the suggested benchmark generator can analyze the ability of MOEAs in tackling expensive scenario-based discrete optimization problems.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A note on the convergence of deterministic gradient sampling in nonsmooth optimization","authors":"Bennet Gebken","doi":"10.1007/s10589-024-00552-0","DOIUrl":"https://doi.org/10.1007/s10589-024-00552-0","url":null,"abstract":"<p>Approximation of subdifferentials is one of the main tasks when computing descent directions for nonsmooth optimization problems. In this article, we propose a bisection method for weakly lower semismooth functions which is able to compute new subgradients that improve a given approximation in case a direction with insufficient descent was computed. Combined with a recently proposed deterministic gradient sampling approach, this yields a deterministic and provably convergent way to approximate subdifferentials for computing descent directions.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancements of discretization approaches for non-convex mixed-integer quadratically constrained quadratic programming: Part I","authors":"Benjamin Beach, Robert Burlacu, Andreas Bärmann, Lukas Hager, Robert Hildebrand","doi":"10.1007/s10589-023-00543-7","DOIUrl":"https://doi.org/10.1007/s10589-023-00543-7","url":null,"abstract":"<p>We study mixed-integer programming (MIP) relaxation techniques for the solution of non-convex mixed-integer quadratically constrained quadratic programs (MIQCQPs). We present MIP relaxation methods for non-convex continuous variable products. In this paper, we consider MIP relaxations based on separable reformulation. The main focus is the introduction of the enhanced separable MIP relaxation for non-convex quadratic products of the form <span>(z=xy)</span>, called <i>hybrid separable</i> (HybS). Additionally, we introduce a logarithmic MIP relaxation for univariate quadratic terms, called <i>sawtooth relaxation</i>, based on Beach (Beach in J Glob Optim 84:869–912, 2022). We combine the latter with HybS and existing separable reformulations to derive MIP relaxations of MIQCQPs. We provide a comprehensive theoretical analysis of these techniques, underlining the theoretical advantages of HybS compared to its predecessors. We perform a broad computational study to demonstrate the effectiveness of the enhanced MIP relaxation in terms of producing tight dual bounds for MIQCQPs. In Part II, we study MIP relaxations that extend the MIP relaxation <i>normalized multiparametric disaggregation technique</i> (NMDT) (Castro in J Glob Optim 64:765–784, 2015) and present a computational study which also includes the MIP relaxations from this work and compares them with a state-of-the-art of MIQCQP solvers.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139646514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Alternative extension of the Hager–Zhang conjugate gradient method for vector optimization","authors":"Qingjie Hu, Liping Zhu, Yu Chen","doi":"10.1007/s10589-023-00548-2","DOIUrl":"https://doi.org/10.1007/s10589-023-00548-2","url":null,"abstract":"<p>Recently, Gonçalves and Prudente proposed an extension of the Hager–Zhang nonlinear conjugate gradient method for vector optimization (Comput Optim Appl 76:889–916, 2020). They initially demonstrated that directly extending the Hager–Zhang method for vector optimization may not result in descent in the vector sense, even when employing an exact line search. By utilizing a sufficiently accurate line search, they subsequently introduced a self-adjusting Hager–Zhang conjugate gradient method in the vector sense. The global convergence of this new scheme was proven without requiring regular restarts or any convex assumptions. In this paper, we propose an alternative extension of the Hager–Zhang nonlinear conjugate gradient method for vector optimization that preserves its desirable scalar property, i.e., ensuring sufficiently descent without relying on any line search or convexity assumption. Furthermore, we investigate its global convergence with the Wolfe line search under mild assumptions. Finally, numerical experiments are presented to illustrate the practical behavior of our proposed method.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139557926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A family of Barzilai-Borwein steplengths from the viewpoint of scaled total least squares","authors":"Shiru Li, Tao Zhang, Yong Xia","doi":"10.1007/s10589-023-00546-4","DOIUrl":"https://doi.org/10.1007/s10589-023-00546-4","url":null,"abstract":"<p>The Barzilai-Borwein (BB) steplengths play great roles in practical gradient methods for solving unconstrained optimization problems. Motivated by the observation that the two well-known BB steplengths correspond to the ordinary and the data least squares, respectively, we introduce a novel family of BB steplengths from the viewpoint of scaled total least squares. Numerical experiments demonstrate that high performance can be received by a carefully-selected BB steplength in the new family.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139497661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: The continuous stochastic gradient method: part II–application and numerics","authors":"Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein","doi":"10.1007/s10589-023-00544-6","DOIUrl":"https://doi.org/10.1007/s10589-023-00544-6","url":null,"abstract":"","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139005130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Bregman–Kaczmarz method for nonlinear systems of equations","authors":"Robert Gower, Dirk A. Lorenz, Maximilian Winkler","doi":"10.1007/s10589-023-00541-9","DOIUrl":"https://doi.org/10.1007/s10589-023-00541-9","url":null,"abstract":"<p>We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of euclidean projections, the method is known as nonlinear Kaczmarz method. Furthermore if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions as compared to the standard Polyak step. Our generalization to Bregman projections comes with the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and locally for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138557060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The continuous stochastic gradient method: part II–application and numerics","authors":"Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein","doi":"10.1007/s10589-023-00540-w","DOIUrl":"https://doi.org/10.1007/s10589-023-00540-w","url":null,"abstract":"<p>In this contribution, we present a numerical analysis of the <i>continuous stochastic gradient</i> (CSG) method, including applications from topology optimization and convergence rates. In contrast to standard stochastic gradient optimization schemes, CSG does not discard old gradient samples from previous iterations. Instead, design dependent integration weights are calculated to form a convex combination as an approximation to the true gradient at the current design. As the approximation error vanishes in the course of the iterations, CSG represents a hybrid approach, starting off like a purely stochastic method and behaving like a full gradient scheme in the limit. In this work, the efficiency of CSG is demonstrated for practically relevant applications from topology optimization. These settings are characterized by both, a large number of optimization variables <i>and</i> an objective function, whose evaluation requires the numerical computation of multiple integrals concatenated in a nonlinear fashion. Such problems could not be solved by any existing optimization method before. Lastly, with regards to convergence rates, first estimates are provided and confirmed with the help of numerical experiments.\u0000</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138513594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}