A Stable Solution of a Nonuniformly Perturbed Quadratic Minimization Problem by the Extragradient Method with Step Size Separated from Zero

Pub Date: 2024-08-20
DOI: 10.1134/s0081543824030027
L. A. Artem’eva, A. A. Dryazhenkov, M. M. Potapov
{"title":"A Stable Solution of a Nonuniformly Perturbed Quadratic Minimization Problem by the Extragradient Method with Step Size Separated from Zero","authors":"L. A. Artem’eva, A. A. Dryazhenkov, M. M. Potapov","doi":"10.1134/s0081543824030027","DOIUrl":null,"url":null,"abstract":"<p>A quadratic minimization problem is considered in Hilbert spaces under constraints given by a linear operator equation and a convex quadratic inequality. The main feature of the problem statement is that the practically available approximations to the exact linear operators specifying the criterion and the constraints converge to them only strongly pointwise rather than in the uniform operator norm, which makes it impossible to justify the use of the classical regularization methods. We propose a regularization method that is applicable in the presence of error estimates for approximate operators in pairs of other operator norms, which are weaker than the original ones. For each of the operators, the pair of corresponding weakened operator norms is obtained by strengthening the norm in the domain of the operator and weakening the norm in its range. The weakening of operator norms usually makes it possible to estimate errors in operators where this was fundamentally impossible in the original norms, for example, in the finite-dimensional approximation of a noncompact operator. From the original optimization formulation, a transition is made to the problem of finding a saddle point of the Lagrange function. The proposed numerical method for finding a saddle point is an iterative regularized extragradient two-stage procedure. At the first stage of each iteration, an approximation to the optimal value of the criterion is refined; at the second stage, the approximate solution with respect to the main variable is refined. Compared to the methods previously developed by the authors and working under similar information conditions, this method is preferable for practical implementation, since it does not require the gradient step size to converge to zero. The main result of the work is the proof of the strong convergence of the approximations generated by the method to one of the exact solutions to the original problem in the norm of the original space.\n</p>","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1134/s0081543824030027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

A quadratic minimization problem is considered in Hilbert spaces under constraints given by a linear operator equation and a convex quadratic inequality. The main feature of the problem statement is that the practically available approximations to the exact linear operators specifying the criterion and the constraints converge to them only strongly pointwise rather than in the uniform operator norm, which makes it impossible to justify the use of the classical regularization methods. We propose a regularization method that is applicable in the presence of error estimates for approximate operators in pairs of other operator norms, which are weaker than the original ones. For each of the operators, the pair of corresponding weakened operator norms is obtained by strengthening the norm in the domain of the operator and weakening the norm in its range. The weakening of operator norms usually makes it possible to estimate errors in operators where this was fundamentally impossible in the original norms, for example, in the finite-dimensional approximation of a noncompact operator. From the original optimization formulation, a transition is made to the problem of finding a saddle point of the Lagrange function. The proposed numerical method for finding a saddle point is an iterative regularized extragradient two-stage procedure. At the first stage of each iteration, an approximation to the optimal value of the criterion is refined; at the second stage, the approximate solution with respect to the main variable is refined. Compared to the methods previously developed by the authors and working under similar information conditions, this method is preferable for practical implementation, since it does not require the gradient step size to converge to zero. The main result of the work is the proof of the strong convergence of the approximations generated by the method to one of the exact solutions to the original problem in the norm of the original space.
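The abstract does not give the update formulas of the authors' regularized two-stage procedure, so the sketch below is only a minimal finite-dimensional illustration of the underlying idea: an extragradient (predictor-corrector) iteration with a constant step size, bounded away from zero, applied to a saddle point of a Lagrange function of an equality-constrained quadratic problem. The matrices `A`, `C`, the vectors `b`, `d`, the regularization weight `alpha`, and the step size `tau` are hypothetical placeholders; the paper's Hilbert-space setting, the quadratic inequality constraint, and the specific regularization scheme are not reproduced here.

```python
import numpy as np

# Toy problem:  minimize ||A x - b||^2  subject to  C x = d,
# handled through the Lagrangian
#     L(x, lam) = ||A x - b||^2 + alpha * ||x||^2 + lam^T (C x - d),
# where the small alpha-term is a placeholder regularization.
rng = np.random.default_rng(0)
n, m, p = 20, 30, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((p, n))
d = rng.standard_normal(p)
alpha = 1e-3  # hypothetical regularization weight

# The saddle-point operator F(x, lam) = (grad_x L, -grad_lam L) is affine,
# so its Lipschitz constant is the spectral norm of its (constant) Jacobian.
J = np.block([[2.0 * A.T @ A + 2.0 * alpha * np.eye(n), C.T],
              [-C, np.zeros((p, p))]])
tau = 0.9 / np.linalg.norm(J, 2)  # constant step below 1/L, separated from zero

def grad_x(x, lam):
    # gradient of L in the primal variable x
    return 2.0 * A.T @ (A @ x - b) + 2.0 * alpha * x + C.T @ lam

def grad_lam(x, lam):
    # gradient of L in the dual variable lam (ascent direction)
    return C @ x - d

x, lam = np.zeros(n), np.zeros(p)
for _ in range(5000):
    # first stage: predictor (extragradient) half-step from the current point
    x_bar = x - tau * grad_x(x, lam)
    lam_bar = lam + tau * grad_lam(x, lam)
    # second stage: corrector step using gradients at the predicted point,
    # with the same constant step size
    x = x - tau * grad_x(x_bar, lam_bar)
    lam = lam + tau * grad_lam(x_bar, lam_bar)

print("constraint residual:", np.linalg.norm(C @ x - d))
print("objective value   :", np.linalg.norm(A @ x - b) ** 2)
```

In this sketch the step size is fixed once from a Lipschitz bound and never driven to zero, which is the feature the title emphasizes; the paper's actual method additionally refines an approximation to the optimal criterion value at the first stage of each iteration, which this illustration does not model.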
