{"title":"Self-Supervised Learning of Iterative Solvers for Constrained Optimization","authors":"Lukas Lüken, Sergio Lucia","doi":"arxiv-2409.08066","DOIUrl":null,"url":null,"abstract":"Obtaining the solution of constrained optimization problems as a function of\nparameters is very important in a multitude of applications, such as control\nand planning. Solving such parametric optimization problems in real time can\npresent significant challenges, particularly when it is necessary to obtain\nhighly accurate solutions or batches of solutions. To solve these challenges,\nwe propose a learning-based iterative solver for constrained optimization which\ncan obtain very fast and accurate solutions by customizing the solver to a\nspecific parametric optimization problem. For a given set of parameters of the\nconstrained optimization problem, we propose a first step with a neural network\npredictor that outputs primal-dual solutions of a reasonable degree of\naccuracy. This primal-dual solution is then improved to a very high degree of\naccuracy in a second step by a learned iterative solver in the form of a neural\nnetwork. A novel loss function based on the Karush-Kuhn-Tucker conditions of\noptimality is introduced, enabling fully self-supervised training of both\nneural networks without the necessity of prior sampling of optimizer solutions.\nThe evaluation of a variety of quadratic and nonlinear parametric test problems\ndemonstrates that the predictor alone is already competitive with recent\nself-supervised schemes for approximating optimal solutions. The second step of\nour proposed learning-based iterative constrained optimizer achieves solutions\nwith orders of magnitude better accuracy than other learning-based approaches,\nwhile being faster to evaluate than state-of-the-art solvers and natively\nallowing for GPU parallelization.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Optimization and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08066","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Obtaining the solution of constrained optimization problems as a function of parameters is important in a multitude of applications, such as control and planning. Solving such parametric optimization problems in real time can present significant challenges, particularly when highly accurate solutions or batches of solutions are required. To address these challenges, we propose a learning-based iterative solver for constrained optimization that obtains very fast and accurate solutions by customizing the solver to a specific parametric optimization problem. For a given set of parameters of the constrained optimization problem, a first step uses a neural network predictor that outputs primal-dual solutions of reasonable accuracy. In a second step, this primal-dual solution is refined to very high accuracy by a learned iterative solver, also in the form of a neural network. A novel loss function based on the Karush-Kuhn-Tucker (KKT) conditions of optimality enables fully self-supervised training of both neural networks, without requiring pre-sampled optimizer solutions. Evaluation on a variety of quadratic and nonlinear parametric test problems demonstrates that the predictor alone is already competitive with recent self-supervised schemes for approximating optimal solutions. The second step of our proposed learning-based iterative constrained optimizer achieves solutions that are orders of magnitude more accurate than other learning-based approaches, while being faster to evaluate than state-of-the-art solvers and natively allowing for GPU parallelization.
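
The abstract describes a two-step pipeline: a predictor network maps problem parameters to an approximate primal-dual solution, and a second network refines that solution iteratively. As a rough illustration, the following is a minimal PyTorch sketch of how such a pipeline could be wired together; the class names, layer sizes, activation choices, and the inputs fed to the learned update rule are all our assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PredictorNet(nn.Module):
    """Step 1: map problem parameters p to an initial primal-dual guess."""
    def __init__(self, n_p, n_x, n_lam, hidden=256):
        super().__init__()
        self.n_x = n_x
        self.net = nn.Sequential(
            nn.Linear(n_p, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_x + n_lam),
        )

    def forward(self, p):
        z = self.net(p)
        return z[:, :self.n_x], z[:, self.n_x:]  # (x0, lam0)

class IterativeSolverNet(nn.Module):
    """Step 2: a learned update rule, applied for a fixed number of
    iterations to refine the predictor's primal-dual guess."""
    def __init__(self, n_p, n_x, n_lam, hidden=256):
        super().__init__()
        self.n_x = n_x
        self.step = nn.Sequential(
            nn.Linear(n_p + n_x + n_lam, hidden), nn.Tanh(),
            nn.Linear(hidden, n_x + n_lam),
        )

    def forward(self, p, x, lam, n_iters=10):
        for _ in range(n_iters):
            dz = self.step(torch.cat([p, x, lam], dim=-1))
            x = x + dz[:, :self.n_x]
            lam = lam + dz[:, self.n_x:]
        return x, lam

# Illustrative usage on a problem with 4 parameters, 2 primal variables,
# and 3 inequality constraints (dimensions chosen arbitrarily).
predictor = PredictorNet(n_p=4, n_x=2, n_lam=3)
solver = IterativeSolverNet(n_p=4, n_x=2, n_lam=3)
p = torch.randn(64, 4)        # a batch of problem parameters
x0, lam0 = predictor(p)       # step 1: coarse primal-dual prediction
x, lam = solver(p, x0, lam0)  # step 2: learned iterative refinement
```

Because every operation here is a batched tensor computation, both steps parallelize naturally over batches of parameter vectors on a GPU, which matches the parallelization claim in the abstract.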
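
The self-supervised training signal comes from a loss built on the KKT conditions. The abstract does not give its exact form, so the sketch below is one plausible variant for an inequality-constrained problem min_x f(x, p) s.t. g(x, p) <= 0: it penalizes squared violations of stationarity, primal and dual feasibility, and complementary slackness. The function name and the squared-penalty formulation are our assumptions, not necessarily the paper's loss.

```python
import torch

def kkt_residual_loss(p, x, lam, f, g):
    """Squared KKT residuals for min_x f(x, p) s.t. g(x, p) <= 0.

    p: (batch, n_p) parameters; x: (batch, n_x) primal iterate, which must
    carry a grad graph (e.g. be a network output); lam: (batch, n_g)
    multipliers. f returns (batch,) objectives, g returns (batch, n_g)
    constraint values. Assumes f and g treat batch elements independently,
    so the summed vector-Jacobian products recover per-sample gradients.
    """
    fx, gx = f(x, p), g(x, p)
    # Stationarity: grad_x f(x, p) + J_g(x, p)^T lam.
    grad_f = torch.autograd.grad(fx.sum(), x, create_graph=True)[0]
    jac_gT_lam = torch.autograd.grad(gx, x, grad_outputs=lam,
                                     create_graph=True)[0]
    stationarity = grad_f + jac_gT_lam
    # Primal feasibility (g <= 0), dual feasibility (lam >= 0),
    # and complementary slackness (lam * g = 0).
    primal = torch.relu(gx)
    dual = torch.relu(-lam)
    comp = lam * gx
    residual = (stationarity.pow(2).sum(-1) + primal.pow(2).sum(-1)
                + dual.pow(2).sum(-1) + comp.pow(2).sum(-1))
    return residual.mean()
```

Minimizing such a residual at the predictor's output and at the refined iterates requires only evaluations of f and g, not pre-computed optimizer solutions, which is what makes the training fully self-supervised.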