{"title":"The Perturbation Method and Regularization of the Lagrange Multiplier Rule in Convex Constrained Optimization Problems","authors":"M. I. Sumin","doi":"10.1134/s0081543824030155","DOIUrl":null,"url":null,"abstract":"<p>We consider a regularization of the Lagrange multiplier rule (LMR) in the nondifferential form in a convex constrained optimization problem with an operator equality constraint in a Hilbert space and a finite number of functional inequality constraints. The objective functional of the problem is assumed to be strongly convex, and the convex closed set of its admissible elements also belongs to a Hilbert space. The constraints of the problem contain additively included parameters, which enables using the so-called perturbation method to study it. The main purpose of the regularized LMR is the stable generation of generalized minimizing sequences (GMSs), which approximate the exact solution of the problem using extremals of the regular Lagrange functional. The regularized LMR itself can be interpreted as a GMS-generating (regularizing) operator, which assigns to each set of input data of the constrained optimization problem the extremal of its corresponding regular Lagrange functional, in which the dual variable is generated following one or another procedure for stabilizing the dual problem. The main attention is paid to (1) studying the connection between the dual regularization procedure and the subdifferential properties of the value function of the original problem; (2) proving the convergence of this procedure in the case of solvability of the dual problem; (3) an appropriate update of the regularized LMR; (4) obtaining the classical LMR as a limiting version of its regularized analog. </p>","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1134/s0081543824030155","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We consider a regularization of the Lagrange multiplier rule (LMR) in nondifferential form for a convex constrained optimization problem with an operator equality constraint in a Hilbert space and a finite number of functional inequality constraints. The objective functional of the problem is assumed to be strongly convex, and the closed convex set of admissible elements is also a subset of a Hilbert space. The constraints of the problem contain additively entering parameters, which makes it possible to study the problem by the so-called perturbation method. The main purpose of the regularized LMR is the stable generation of generalized minimizing sequences (GMSs), which approximate the exact solution of the problem by extremals of the regular Lagrange functional. The regularized LMR itself can be interpreted as a GMS-generating (regularizing) operator that assigns to each set of input data of the constrained optimization problem the extremal of its corresponding regular Lagrange functional, in which the dual variable is generated by one or another procedure for stabilizing the dual problem. The main attention is paid to (1) studying the connection between the dual regularization procedure and the subdifferential properties of the value function of the original problem; (2) proving the convergence of this procedure in the case of solvability of the dual problem; (3) an appropriate update of the regularized LMR; (4) obtaining the classical LMR as a limiting version of its regularized analog.
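For orientation, the sketch below illustrates the kind of construction the abstract describes. The notation (spaces $Z$ and $H$, operator $A$, functionals $f$ and $g_i$, parameter $p$, accuracy level $\delta$, and the consistency condition on the regularization parameter $\alpha(\delta)$) is assumed here for illustration and is not taken verbatim from the paper.
\[
(P_p):\qquad \min_{z \in D} f(z)
\quad\text{s.t.}\quad A z = h + p_1,\qquad g_i(z) \le p_2^i,\ \ i = 1,\dots,m,
\]
where $Z$ and $H$ are Hilbert spaces, $D \subset Z$ is closed and convex, $f$ is strongly convex, $A : Z \to H$ is linear and bounded, the $g_i$ are convex, and $p = (p_1, p_2)$ collects the additively entering perturbation parameters. The regular Lagrange functional is
\[
L^{p}(z, \lambda, \mu) \;=\; f(z) + \langle \lambda,\, A z - h - p_1 \rangle
+ \sum_{i=1}^{m} \mu_i \bigl(g_i(z) - p_2^i\bigr),
\qquad \lambda \in H,\ \mu \in \mathbb{R}^m_{+}.
\]
With the input data known only to accuracy $\delta$, a Tikhonov-stabilized dual problem selects the dual variable,
\[
(\lambda^{\delta}, \mu^{\delta}) \in
\operatorname*{Arg\,max}_{\lambda \in H,\ \mu \ge 0}
\Bigl\{ \min_{z \in D} L^{p,\delta}(z, \lambda, \mu)
\;-\; \alpha(\delta)\bigl(\|\lambda\|^{2} + \|\mu\|^{2}\bigr) \Bigr\},
\]
and the regularized LMR returns the corresponding extremal of the regular Lagrange functional,
\[
z^{\delta} \;=\; \operatorname*{arg\,min}_{z \in D} L^{p,\delta}(z, \lambda^{\delta}, \mu^{\delta}).
\]
Under a consistency condition of the type $\alpha(\delta) \to 0$, $\delta/\alpha(\delta) \to 0$, the elements $z^{\delta}$ form a generalized minimizing sequence for $(P_p)$ as $\delta \to 0$. The value function $\beta(p) = \inf (P_p)$ links this construction to duality: by standard convex analysis, dual maximizers are related to subgradients of $\beta$ at the nominal parameter value, which is the connection studied in item (1) of the abstract.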