WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions

Matthew J. Colbrook
{"title":"WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions","authors":"Matthew J. Colbrook","doi":"10.1137/21m1455000","DOIUrl":null,"url":null,"abstract":"Sharpness conditions directly control the recovery performance of restart schemes for first-order optimization methods without the need for restrictive assumptions such as strong convexity. However, they are challenging to apply in the presence of noise or approximate model classes (e.g., approximate sparsity). We provide a first-order method: weighted, accelerated, and restarted primal-dual (WARPd), based on primal-dual iterations and a novel restart-reweight scheme. Under a generic approximate sharpness condition, WARPd achieves stable linear convergence to the desired vector. Many problems of interest fit into this framework. For example, we analyze sparse recovery in compressed sensing, low-rank matrix recovery, matrix completion, TV regularization, minimization of ∥Bx∥l1 under constraints (l-analysis problems for general B), and mixed regularization problems. We show how several quantities controlling recovery performance also provide explicit approximate sharpness constants. Numerical experiments show that WARPd compares favorably with specialized state-of-the-art methods and is ideally suited for solving large-scale problems. We also present a noise-blind variant based on a square-root LASSO decoder. Finally, we show how to unroll WARPd as neural networks. This approximation theory result provides lower bounds for stable and accurate neural networks for inverse problems and sheds light on architecture choices. Code and a gallery of examples are available online as a MATLAB package.","PeriodicalId":185319,"journal":{"name":"SIAM J. Imaging Sci.","volume":"315 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM J. Imaging Sci.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1137/21m1455000","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Sharpness conditions directly control the recovery performance of restart schemes for first-order optimization methods without the need for restrictive assumptions such as strong convexity. However, they are challenging to apply in the presence of noise or approximate model classes (e.g., approximate sparsity). We provide a first-order method: weighted, accelerated, and restarted primal-dual (WARPd), based on primal-dual iterations and a novel restart-reweight scheme. Under a generic approximate sharpness condition, WARPd achieves stable linear convergence to the desired vector. Many problems of interest fit into this framework. For example, we analyze sparse recovery in compressed sensing, low-rank matrix recovery, matrix completion, TV regularization, minimization of ∥Bx∥ℓ1 under constraints (ℓ1-analysis problems for general B), and mixed regularization problems. We show how several quantities controlling recovery performance also provide explicit approximate sharpness constants. Numerical experiments show that WARPd compares favorably with specialized state-of-the-art methods and is ideally suited for solving large-scale problems. We also present a noise-blind variant based on a square-root LASSO decoder. Finally, we show how to unroll WARPd as a neural network. This approximation theory result provides lower bounds for stable and accurate neural networks for inverse problems and sheds light on architecture choices. Code and a gallery of examples are available online as a MATLAB package.
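
Background and illustrative sketch

For readers unfamiliar with the terminology: in the restart literature, a sharpness condition states that the objective gap controls the distance to the solution set X̂, i.e., for constants α > 0 and β ≥ 1,

    f(x) − f̂ ≥ α · d(x, X̂)^β.

An approximate sharpness condition (written here schematically; the paper's exact form and notation may differ) relaxes this with a slack term η ≥ 0 that absorbs noise and model mismatch,

    f(x) − f̂ ≥ α · d(x, X̂)^β − η,

so that a restarted first-order method can converge linearly down to an error floor governed by η, rather than to the exact minimizer.

The sketch below illustrates the restart mechanism on a representative problem from the abstract, quadratically constrained basis pursuit: min_x ∥x∥ℓ1 subject to ∥Ax − y∥ℓ2 ≤ ς. It is a minimal Python rendering of restarted primal-dual (Chambolle-Pock/PDHG) iterations in the spirit of WARPd, not the paper's exact scheme: the genuine algorithm also reweights and rescales across restarts, and the reference implementation is the author's MATLAB package. All names and parameter values here are illustrative choices.

import numpy as np

def soft_threshold(v, t):
    # Proximal map of t*||.||_1: componentwise shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inner_pdhg(A, y, varsigma, x0, n_inner, tau, sigma):
    # One inner run of PDHG for the saddle-point formulation
    #   min_x max_z ||x||_1 + <Ax - y, z> - varsigma*||z||_2,
    # started at x0. Returns the ergodic (averaged) primal iterate.
    x, x_bar = x0.copy(), x0.copy()
    z = np.zeros(A.shape[0])
    x_avg = np.zeros_like(x0)
    for _ in range(n_inner):
        # Dual step: prox of sigma*(<y, .> + varsigma*||.||_2),
        # a block soft-thresholding of z + sigma*(A x_bar - y).
        w = z + sigma * (A @ x_bar - y)
        nw = np.linalg.norm(w)
        z = w * max(0.0, 1.0 - sigma * varsigma / nw) if nw > 0 else w
        # Primal step: prox of tau*||.||_1 (soft-thresholding).
        x_new = soft_threshold(x - tau * (A.T @ z), tau)
        x_bar = 2.0 * x_new - x          # extrapolation, theta = 1
        x = x_new
        x_avg += x / n_inner
    return x_avg

def restarted_pdhg(A, y, varsigma, n_restarts=20, n_inner=200):
    # Restart scheme: each inner run starts from the previous run's
    # ergodic average. Under a sharpness-type condition the error
    # contracts by a fixed factor per restart (linear convergence).
    L = np.linalg.norm(A, 2)             # ||A||; PDHG needs tau*sigma*L^2 < 1
    tau = sigma = 0.99 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_restarts):
        x = inner_pdhg(A, y, varsigma, x, n_inner, tau, sigma)
    return x

# Hypothetical usage on a random compressed-sensing instance:
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_rec = restarted_pdhg(A, y, varsigma=0.15)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))

Each restart feeds the ergodic average of the inner run back in as the new anchor. Under (approximate) sharpness, this makes the error contract by a fixed factor per restart, which is the mechanism behind the linear convergence in the paper's title.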