Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange
{"title":"约束优化近距离法的扩展。","authors":"Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>The current paper studies the problem of minimizing a loss <i>f</i>(<b><i>x</i></b>) subject to constraints of the form <b><i>Dx</i></b> ∈ <i>S</i>, where <i>S</i> is a closed set, convex or not, and <i><b>D</b></i> is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives <math><mrow><mi>f</mi><mo>(</mo><mstyle><mi>x</mi></mstyle><mo>)</mo><mo>+</mo><mfrac><mi>ρ</mi><mn>2</mn></mfrac><mtext>dist</mtext><msup><mrow><mo>(</mo><mstyle><mi>D</mi><mi>x</mi></mstyle><mo>,</mo><mi>S</mi><mo>)</mo></mrow><mn>2</mn></msup></mrow></math> involving large tuning constants <i>ρ</i> and the squared Euclidean distance of <b><i>Dx</i></b> from <i>S</i>. The next iterate <b><i>x</i></b><sub><i>n</i>+1</sub> of the corresponding proximal distance algorithm is constructed from the current iterate <b><i>x</i></b><sub><i>n</i></sub> by minimizing the majorizing surrogate function <math><mrow><mi>f</mi><mo>(</mo><mstyle><mi>x</mi></mstyle><mo>)</mo><mo>+</mo><mfrac><mi>ρ</mi><mn>2</mn></mfrac><msup><mrow><mrow><mo>‖</mo><mrow><mstyle><mi>D</mi><mi>x</mi></mstyle><mo>-</mo><msub><mi>𝒫</mi><mi>S</mi></msub><mrow><mo>(</mo><mrow><mstyle><mi>D</mi></mstyle><msub><mstyle><mi>x</mi></mstyle><mi>n</mi></msub></mrow><mo>)</mo></mrow></mrow><mo>‖</mo></mrow></mrow><mn>2</mn></msup></mrow></math>. For fixed <i>ρ</i> and a subanalytic loss <i>f</i>(<b><i>x</i></b>) and a subanalytic constraint set <i>S</i>, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare their results to those delivered by the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.</p>","PeriodicalId":50161,"journal":{"name":"Journal of Machine Learning Research","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191389/pdf/","citationCount":"0","resultStr":"{\"title\":\"Extensions to the Proximal Distance Method of Constrained Optimization.\",\"authors\":\"Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The current paper studies the problem of minimizing a loss <i>f</i>(<b><i>x</i></b>) subject to constraints of the form <b><i>Dx</i></b> ∈ <i>S</i>, where <i>S</i> is a closed set, convex or not, and <i><b>D</b></i> is a matrix that fuses parameters. 
Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives <math><mrow><mi>f</mi><mo>(</mo><mstyle><mi>x</mi></mstyle><mo>)</mo><mo>+</mo><mfrac><mi>ρ</mi><mn>2</mn></mfrac><mtext>dist</mtext><msup><mrow><mo>(</mo><mstyle><mi>D</mi><mi>x</mi></mstyle><mo>,</mo><mi>S</mi><mo>)</mo></mrow><mn>2</mn></msup></mrow></math> involving large tuning constants <i>ρ</i> and the squared Euclidean distance of <b><i>Dx</i></b> from <i>S</i>. The next iterate <b><i>x</i></b><sub><i>n</i>+1</sub> of the corresponding proximal distance algorithm is constructed from the current iterate <b><i>x</i></b><sub><i>n</i></sub> by minimizing the majorizing surrogate function <math><mrow><mi>f</mi><mo>(</mo><mstyle><mi>x</mi></mstyle><mo>)</mo><mo>+</mo><mfrac><mi>ρ</mi><mn>2</mn></mfrac><msup><mrow><mrow><mo>‖</mo><mrow><mstyle><mi>D</mi><mi>x</mi></mstyle><mo>-</mo><msub><mi>𝒫</mi><mi>S</mi></msub><mrow><mo>(</mo><mrow><mstyle><mi>D</mi></mstyle><msub><mstyle><mi>x</mi></mstyle><mi>n</mi></msub></mrow><mo>)</mo></mrow></mrow><mo>‖</mo></mrow></mrow><mn>2</mn></msup></mrow></math>. For fixed <i>ρ</i> and a subanalytic loss <i>f</i>(<b><i>x</i></b>) and a subanalytic constraint set <i>S</i>, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare their results to those delivered by the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.</p>\",\"PeriodicalId\":50161,\"journal\":{\"name\":\"Journal of Machine Learning Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191389/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Machine Learning Research\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Machine Learning Research","FirstCategoryId":"94","ListUrlMain":"","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Extensions to the Proximal Distance Method of Constrained Optimization.
The current paper studies the problem of minimizing a loss f(x) subject to constraints of the form Dx ∈ S, where S is a closed set, convex or not, and D is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives f(x) + (ρ/2) dist(Dx, S)² involving large tuning constants ρ and the squared Euclidean distance of Dx from S. The next iterate x_{n+1} of the corresponding proximal distance algorithm is constructed from the current iterate x_n by minimizing the majorizing surrogate function f(x) + (ρ/2)‖Dx − 𝒫_S(Dx_n)‖². For fixed ρ, a subanalytic loss f(x), and a subanalytic constraint set S, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare their results to those delivered by the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest descent variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
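The surrogate above majorizes the penalized objective because 𝒫_S(Dx_n) lies in S, so dist(Dx, S)² ≤ ‖Dx − 𝒫_S(Dx_n)‖² for every x, with equality at x = x_n; each surrogate minimization therefore drives the penalized objective downhill. For concreteness, here is a minimal sketch of a proximal distance iteration in Julia. It is not the authors' ProximalDistanceAlgorithms.jl implementation: it assumes D = I, a quadratic loss f(x) = ½‖x − y‖², and S equal to the nonnegative orthant, and the function names (proximal_distance, project_nonneg) and the annealing schedule for ρ are hypothetical choices made only for illustration.

```julia
# Minimal illustrative sketch of a proximal distance iteration (not the
# authors' package). Assumptions: D = I, f(x) = 0.5*norm(x - y)^2, and
# S = nonnegative orthant, so the surrogate
#   f(x) + (ρ/2)*norm(x - P_S(x_n))^2
# has the closed-form minimizer x = (y + ρ*P_S(x_n)) / (1 + ρ).

project_nonneg(z) = max.(z, 0.0)            # Euclidean projection onto S

function proximal_distance(y; ρ0 = 1.0, mult = 1.2, iters = 200)
    x = copy(y)
    ρ = ρ0
    for _ in 1:iters
        p = project_nonneg(x)               # 𝒫_S(D x_n) with D = I
        x = (y .+ ρ .* p) ./ (1 + ρ)        # exact surrogate minimizer
        ρ *= mult                           # anneal the penalty constant upward
    end
    return x
end

y = [1.5, -0.7, 0.3, -2.0]
println(proximal_distance(y))               # ≈ projection of y onto the nonnegative orthant
```

With D = I and a quadratic loss the surrogate minimizer is available in closed form; in the paper's general setting each iteration instead requires a linear system solve, which the steepest descent variant is designed to avoid.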
Journal description:
The Journal of Machine Learning Research (JMLR) provides an international forum for the electronic and paper publication of high-quality scholarly articles in all areas of machine learning. All published papers are freely available online.
JMLR has a commitment to rigorous yet rapid reviewing.
JMLR seeks previously unpublished papers on machine learning that contain:
new principled algorithms with sound empirical validation, and with justification of theoretical, psychological, or biological nature;
experimental and/or theoretical studies yielding new insight into the design and behavior of learning in intelligent systems;
accounts of applications of existing techniques that shed light on the strengths and weaknesses of the methods;
formalization of new learning tasks (e.g., in the context of new applications) and of methods for assessing performance on those tasks;
development of new analytical frameworks that advance theoretical studies of practical learning methods;
computational models of data from natural learning systems at the behavioral or neural level; or
extremely well-written surveys of existing work.