(1 + eps)-Approximate Sparse Recovery

Eric Price, David P. Woodruff
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS 2011)
DOI: 10.1109/FOCS.2011.92 · Published: 2011-10-19
Citations: 47

Abstract

The problem central to sparse recovery and compressive sensing is that of \emph{stable sparse recovery}: we want a distribution $\mathcal{A}$ of matrices $A \in \mathbb{R}^{m \times n}$ such that, for any $x \in \mathbb{R}^n$ and with probability $1 - \delta > 2/3$ over $A \in \mathcal{A}$, there is an algorithm to recover $\hat{x}$ from $Ax$ with
$$\|\hat{x} - x\|_p \leq C \min_{k\text{-sparse } x'} \|x - x'\|_p$$
for some constant $C > 1$ and norm $p$. The measurement complexity of this problem is well understood for constant $C > 1$. However, in a variety of applications it is important to obtain $C = 1 + \epsilon$ for a small $\epsilon > 0$, and this complexity is not well understood. We resolve the dependence on $\epsilon$ in the number of measurements required of a $k$-sparse recovery algorithm, up to polylogarithmic factors, for the central cases of $p=1$ and $p=2$. Namely, we give new algorithms and lower bounds that show the number of measurements required is $k/\epsilon^{p/2} \,\mathrm{polylog}(n)$. For $p=2$, our bound of $\frac{1}{\epsilon} k \log(n/k)$ is tight up to \emph{constant} factors. We also give matching bounds when the output is required to be $k$-sparse, in which case we achieve $k/\epsilon^p \,\mathrm{polylog}(n)$. This shows the distinction between the complexity of sparse and non-sparse outputs is fundamental.
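To make the guarantee in the abstract concrete: the benchmark $\min_{k\text{-sparse } x'} \|x - x'\|_p$ is attained by keeping the $k$ largest-magnitude entries of $x$, so it equals the $\ell_p$ norm of the remaining "tail". The following is a minimal NumPy sketch (not the paper's algorithm, just an illustration of the error criterion) that computes this benchmark and checks whether a candidate $\hat{x}$ satisfies the $(1+\epsilon)$-approximate recovery condition:

```python
import numpy as np

def best_k_sparse_error(x, k, p=2):
    """The benchmark min over k-sparse x' of ||x - x'||_p.

    The minimizer keeps the k largest-magnitude entries of x,
    so the error is the l_p norm of the remaining tail.
    """
    order = np.argsort(np.abs(x))[::-1]   # indices by decreasing magnitude
    tail = x[order[k:]]
    return np.linalg.norm(tail, ord=p)

def satisfies_guarantee(x_hat, x, k, eps, p=2):
    """Check ||x_hat - x||_p <= (1 + eps) * min_{k-sparse x'} ||x - x'||_p."""
    return np.linalg.norm(x_hat - x, ord=p) <= (1 + eps) * best_k_sparse_error(x, k, p)

# Example: a signal with 3 large entries plus small noise.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(100)
x[:3] += np.array([10.0, -8.0, 5.0])

# Keeping exactly the top-k entries achieves the benchmark (C = 1),
# so it trivially satisfies the guarantee for any eps >= 0.
x_hat = np.where(np.abs(x) >= np.sort(np.abs(x))[-3], x, 0.0)
print(satisfies_guarantee(x_hat, x, k=3, eps=0.1))
```

Note that this only evaluates the error criterion; the paper's contribution is achieving it from few linear measurements $Ax$, without access to $x$ itself.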