Self-Supervised Learning of Iterative Solvers for Constrained Optimization

Lukas Lüken, Sergio Lucia
{"title":"Self-Supervised Learning of Iterative Solvers for Constrained Optimization","authors":"Lukas Lüken, Sergio Lucia","doi":"arxiv-2409.08066","DOIUrl":null,"url":null,"abstract":"Obtaining the solution of constrained optimization problems as a function of\nparameters is very important in a multitude of applications, such as control\nand planning. Solving such parametric optimization problems in real time can\npresent significant challenges, particularly when it is necessary to obtain\nhighly accurate solutions or batches of solutions. To solve these challenges,\nwe propose a learning-based iterative solver for constrained optimization which\ncan obtain very fast and accurate solutions by customizing the solver to a\nspecific parametric optimization problem. For a given set of parameters of the\nconstrained optimization problem, we propose a first step with a neural network\npredictor that outputs primal-dual solutions of a reasonable degree of\naccuracy. This primal-dual solution is then improved to a very high degree of\naccuracy in a second step by a learned iterative solver in the form of a neural\nnetwork. A novel loss function based on the Karush-Kuhn-Tucker conditions of\noptimality is introduced, enabling fully self-supervised training of both\nneural networks without the necessity of prior sampling of optimizer solutions.\nThe evaluation of a variety of quadratic and nonlinear parametric test problems\ndemonstrates that the predictor alone is already competitive with recent\nself-supervised schemes for approximating optimal solutions. The second step of\nour proposed learning-based iterative constrained optimizer achieves solutions\nwith orders of magnitude better accuracy than other learning-based approaches,\nwhile being faster to evaluate than state-of-the-art solvers and natively\nallowing for GPU parallelization.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Optimization and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08066","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Obtaining the solution of constrained optimization problems as a function of their parameters is important in many applications, such as control and planning. Solving such parametric optimization problems in real time can be challenging, particularly when highly accurate solutions or batches of solutions are required. To address these challenges, we propose a learning-based iterative solver for constrained optimization that obtains very fast and accurate solutions by customizing the solver to a specific parametric optimization problem. For a given set of problem parameters, a first step uses a neural network predictor that outputs primal-dual solutions of reasonable accuracy. In a second step, this primal-dual solution is refined to very high accuracy by a learned iterative solver in the form of a neural network. A novel loss function based on the Karush-Kuhn-Tucker (KKT) conditions of optimality is introduced, enabling fully self-supervised training of both neural networks without prior sampling of optimizer solutions. Evaluation on a variety of quadratic and nonlinear parametric test problems demonstrates that the predictor alone is already competitive with recent self-supervised schemes for approximating optimal solutions. The second step of the proposed learning-based iterative constrained optimizer achieves solutions with orders of magnitude better accuracy than other learning-based approaches, while being faster to evaluate than state-of-the-art solvers and natively supporting GPU parallelization.
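The abstract does not spell out the loss function itself. As a minimal, hypothetical sketch of what a KKT-based self-supervised loss could look like in PyTorch: the Fischer-Burmeister smoothing of the complementarity conditions, and all function and variable names below, are illustrative assumptions, not the authors' implementation.

```python
import torch

def kkt_loss(x, lam, nu, f, g, h):
    # Self-supervised loss: squared norm of the KKT residual of
    #     min_x f(x)  s.t.  g(x) <= 0,  h(x) = 0
    # at a primal-dual candidate (x, lam, nu). Driving this residual to
    # zero pushes the candidate toward a KKT point, so no pre-sampled
    # optimizer solutions are needed as labels. Unbatched for clarity;
    # x must carry requires_grad so autograd can form the stationarity term.
    gx, hx = g(x), h(x)
    lagrangian = f(x) + lam @ gx + nu @ hx
    # Stationarity: grad_x L(x, lam, nu) = 0.
    grad_L = torch.autograd.grad(lagrangian, x, create_graph=True)[0]
    # Dual feasibility and complementarity via the Fischer-Burmeister
    # function phi(a, b) = a + b - sqrt(a^2 + b^2), which vanishes exactly
    # when a >= 0, b >= 0, and a * b = 0 (here a = lam, b = -g(x)).
    comp = lam - gx - torch.sqrt(lam**2 + gx**2 + 1e-12)
    # Equality constraints hx enter directly as primal feasibility.
    residual = torch.cat([grad_L, hx, comp])
    return (residual**2).mean()

# Toy usage on min (x1 - p)^2 + x2^2  s.t.  x1 + x2 = 1,  -x1 <= 0:
p = 2.0
x = torch.tensor([0.5, 0.5], requires_grad=True)
lam = torch.zeros(1)  # inequality multiplier
nu = torch.zeros(1)   # equality multiplier
loss = kkt_loss(
    x, lam, nu,
    f=lambda x: (x[0] - p) ** 2 + x[1] ** 2,
    g=lambda x: -x[0:1],
    h=lambda x: x[0:1] + x[1:2] - 1.0,
)
```

Because such a loss depends only on the problem functions and a candidate point, it can score both the predictor's initial guess and the iterates of a learned refinement of the (assumed) form z_{k+1} = z_k + NN_theta(z_k, residual, p), matching the two-step structure described in the abstract; the actual architectures and loss details are given in the paper, not here.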