{"title":"二次正则化线性规划的定量收敛性","authors":"Alberto González-Sanz, Marcel Nutz","doi":"10.1007/s00245-025-10267-1","DOIUrl":null,"url":null,"abstract":"<div><p>Linear programs with quadratic (“ridge”) regularization are of recent interest in optimal transport: unlike entropic regularization, the squared-norm penalty gives rise to sparse approximations of optimal transport couplings. More broadly, quadratic regularization is used in overparametrized learning problems to single out a particular solution. It is well known that the solution of a quadratically regularized linear program over any polytope converges stationarily to the minimal-norm solution of the linear program when the regularization parameter tends to zero. However, that result is merely qualitative. Our main result quantifies the convergence by specifying the exact threshold for the regularization parameter, after which the regularized solution also solves the linear program. Moreover, we bound the suboptimality of the regularized solution before the threshold. These results are complemented by a convergence rate for the regime of large regularization. We apply our general results to the setting of optimal transport, where we shed light on how the threshold and suboptimality depend on the number of data points.</p></div>","PeriodicalId":55566,"journal":{"name":"Applied Mathematics and Optimization","volume":"91 3","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Quantitative Convergence of Quadratically Regularized Linear Programs\",\"authors\":\"Alberto González-Sanz, Marcel Nutz\",\"doi\":\"10.1007/s00245-025-10267-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Linear programs with quadratic (“ridge”) regularization are of recent interest in optimal transport: unlike entropic regularization, the squared-norm penalty gives rise to sparse approximations of optimal transport couplings. More broadly, quadratic regularization is used in overparametrized learning problems to single out a particular solution. It is well known that the solution of a quadratically regularized linear program over any polytope converges stationarily to the minimal-norm solution of the linear program when the regularization parameter tends to zero. However, that result is merely qualitative. Our main result quantifies the convergence by specifying the exact threshold for the regularization parameter, after which the regularized solution also solves the linear program. Moreover, we bound the suboptimality of the regularized solution before the threshold. These results are complemented by a convergence rate for the regime of large regularization. 
We apply our general results to the setting of optimal transport, where we shed light on how the threshold and suboptimality depend on the number of data points.</p></div>\",\"PeriodicalId\":55566,\"journal\":{\"name\":\"Applied Mathematics and Optimization\",\"volume\":\"91 3\",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2025-05-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Mathematics and Optimization\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s00245-025-10267-1\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Mathematics and Optimization","FirstCategoryId":"100","ListUrlMain":"https://link.springer.com/article/10.1007/s00245-025-10267-1","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Quantitative Convergence of Quadratically Regularized Linear Programs
Linear programs with quadratic (“ridge”) regularization are of recent interest in optimal transport: unlike entropic regularization, the squared-norm penalty gives rise to sparse approximations of optimal transport couplings. More broadly, quadratic regularization is used in overparametrized learning problems to single out a particular solution. It is well known that the solution of a quadratically regularized linear program over any polytope converges stationarily to the minimal-norm solution of the linear program when the regularization parameter tends to zero. However, that result is merely qualitative. Our main result quantifies the convergence by specifying the exact threshold for the regularization parameter, after which the regularized solution also solves the linear program. Moreover, we bound the suboptimality of the regularized solution before the threshold. These results are complemented by a convergence rate for the regime of large regularization. We apply our general results to the setting of optimal transport, where we shed light on how the threshold and suboptimality depend on the number of data points.
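To make the abstract's objects concrete, here is a minimal numerical sketch, not taken from the paper, assuming the numpy and cvxpy packages are available. In the discrete optimal transport setting it solves the quadratically regularized problem min_P <C, P> + (eps/2)·||P||_F^2 over the transportation polytope for decreasing eps, and compares the resulting transport cost with the unregularized LP value. The notation and the illustrative threshold behavior below are assumptions for exposition; the exact threshold identified in the paper is problem dependent and is not computed here.

```python
# Illustrative sketch (not from the paper): quadratically regularized optimal
# transport on a tiny discrete problem, using cvxpy (assumed installed).
# We solve  min_P <C, P> + (eps/2) * ||P||_F^2  over the transportation
# polytope and compare with the unregularized LP as eps shrinks.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 4
C = rng.random((n, n))          # cost matrix
a = np.full(n, 1.0 / n)         # uniform source marginal
b = np.full(n, 1.0 / n)         # uniform target marginal

def solve_reg_ot(eps):
    """Solve the quadratically regularized OT problem for a given eps >= 0."""
    P = cp.Variable((n, n), nonneg=True)
    constraints = [cp.sum(P, axis=1) == a, cp.sum(P, axis=0) == b]
    objective = cp.sum(cp.multiply(C, P))
    if eps > 0:
        objective = objective + (eps / 2) * cp.sum_squares(P)
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return P.value, float(np.sum(C * P.value))

# Unregularized LP value for reference.
_, lp_value = solve_reg_ot(0.0)

# As eps decreases, the transport cost <C, P_eps> approaches (and, below some
# problem-dependent threshold, matches) the LP value, while P_eps stays sparse.
for eps in [1.0, 0.1, 0.01, 0.001]:
    P_eps, cost = solve_reg_ot(eps)
    print(f"eps={eps:7.3f}  transport cost={cost:.6f}  "
          f"suboptimality={cost - lp_value:.2e}  "
          f"nonzeros={(P_eps > 1e-8).sum()}")
```

In this toy run, sparsity of the regularized coupling (in contrast to the dense couplings produced by entropic regularization) shows up as the count of nonzero entries staying small even for moderate eps.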
Journal Introduction:
The Applied Mathematics and Optimization Journal covers a broad range of mathematical methods, in particular those that bridge with optimization and have some connection with applications. Core topics include calculus of variations, partial differential equations, stochastic control, optimization of deterministic or stochastic systems in discrete or continuous time, homogenization, control theory, mean field games, dynamic games and optimal transport. Algorithmic, data-analytic, machine learning and numerical methods that support the modeling and analysis of optimization problems are encouraged. Of particular interest are papers that present a novel idea in either the theory or the model and have some connection with potential applications in science and engineering.