{"title":"具有Bregman距离的牛顿法梯度正则化","authors":"Nikita Doikov, Yurii Nesterov","doi":"10.1007/s10107-023-01943-7","DOIUrl":null,"url":null,"abstract":"<p><p>In this paper, we propose a first second-order scheme based on arbitrary non-Euclidean norms, incorporated by Bregman distances. They are introduced directly in the Newton iterate with regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite convex optimization problem, we establish the global convergence rate of the order <math><mrow><mi>O</mi><mo>(</mo><msup><mi>k</mi><mrow><mo>-</mo><mn>2</mn></mrow></msup><mo>)</mo></mrow></math> both in terms of the functional residual and in the norm of subgradients. Our main assumption on the smooth part of the objective is Lipschitz continuity of its Hessian. For uniformly convex functions of degree three, we justify global linear rate, and for strongly convex function we prove the local superlinear rate of convergence. Our approach can be seen as a relaxation of the Cubic Regularization of the Newton method (Nesterov and Polyak in Math Program 108(1):177-205, 2006) for convex minimization problems. This relaxation preserves the convergence properties and global complexities of the Cubic Newton in convex case, while the auxiliary subproblem at each iteration is simpler. We equip our method with adaptive search procedure for choosing the regularization parameter. We propose also an accelerated scheme with convergence rate <math><mrow><mi>O</mi><mo>(</mo><msup><mi>k</mi><mrow><mo>-</mo><mn>3</mn></mrow></msup><mo>)</mo></mrow></math>, where <i>k</i> is the iteration counter.</p>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10869408/pdf/","citationCount":"0","resultStr":"{\"title\":\"Gradient regularization of Newton method with Bregman distances.\",\"authors\":\"Nikita Doikov, Yurii Nesterov\",\"doi\":\"10.1007/s10107-023-01943-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In this paper, we propose a first second-order scheme based on arbitrary non-Euclidean norms, incorporated by Bregman distances. They are introduced directly in the Newton iterate with regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite convex optimization problem, we establish the global convergence rate of the order <math><mrow><mi>O</mi><mo>(</mo><msup><mi>k</mi><mrow><mo>-</mo><mn>2</mn></mrow></msup><mo>)</mo></mrow></math> both in terms of the functional residual and in the norm of subgradients. Our main assumption on the smooth part of the objective is Lipschitz continuity of its Hessian. For uniformly convex functions of degree three, we justify global linear rate, and for strongly convex function we prove the local superlinear rate of convergence. Our approach can be seen as a relaxation of the Cubic Regularization of the Newton method (Nesterov and Polyak in Math Program 108(1):177-205, 2006) for convex minimization problems. This relaxation preserves the convergence properties and global complexities of the Cubic Newton in convex case, while the auxiliary subproblem at each iteration is simpler. We equip our method with adaptive search procedure for choosing the regularization parameter. 
We propose also an accelerated scheme with convergence rate <math><mrow><mi>O</mi><mo>(</mo><msup><mi>k</mi><mrow><mo>-</mo><mn>3</mn></mrow></msup><mo>)</mo></mrow></math>, where <i>k</i> is the iteration counter.</p>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10869408/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1007/s10107-023-01943-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/3/24 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s10107-023-01943-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/3/24 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
引用次数: 0
Gradient regularization of Newton method with Bregman distances.
Abstract

In this paper, we propose the first second-order scheme based on arbitrary non-Euclidean norms, incorporated via Bregman distances. They are introduced directly into the Newton iterate with a regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite convex optimization problem, we establish a global convergence rate of the order O(k^{-2}), both in terms of the functional residual and in the norm of subgradients. Our main assumption on the smooth part of the objective is Lipschitz continuity of its Hessian. For uniformly convex functions of degree three, we justify a global linear rate, and for strongly convex functions we prove a local superlinear rate of convergence. Our approach can be seen as a relaxation of the Cubic Regularization of the Newton method (Nesterov and Polyak in Math Program 108(1):177-205, 2006) for convex minimization problems. This relaxation preserves the convergence properties and global complexities of the Cubic Newton method in the convex case, while the auxiliary subproblem at each iteration is simpler. We equip our method with an adaptive search procedure for choosing the regularization parameter. We also propose an accelerated scheme with convergence rate O(k^{-3}), where k is the iteration counter.