(1 + eps)-Approximate Sparse Recovery
Eric Price, David P. Woodruff
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Published 2011-10-19. DOI: 10.1109/FOCS.2011.92. Citations: 47.
The problem central to sparse recovery and compressive sensing is that of \emph{stable sparse recovery}: we want a distribution $\mathcal{A}$ of matrices $A \in \mathbb{R}^{m \times n}$ such that, for any $x \in \mathbb{R}^n$ and with probability $1 - \delta > 2/3$ over $A \in \mathcal{A}$, there is an algorithm to recover $\hat{x}$ from $Ax$ with
\begin{align}
\|\hat{x} - x\|_p \leq C \min_{k\text{-sparse } x'} \|x - x'\|_p
\end{align}
for some constant $C > 1$ and norm $p$. The measurement complexity of this problem is well understood for constant $C > 1$. However, in a variety of applications it is important to obtain $C = 1+\epsilon$ for a small $\epsilon > 0$, and this complexity is not well understood. We resolve the dependence on $\epsilon$ in the number of measurements required of a $k$-sparse recovery algorithm, up to polylogarithmic factors, for the central cases of $p=1$ and $p=2$. Namely, we give new algorithms and lower bounds that show the number of measurements required is $k/\epsilon^{p/2} \,\mathrm{polylog}(n)$. For $p=2$, our bound of $\frac{1}{\epsilon}\,k\log(n/k)$ is tight up to \emph{constant} factors. We also give matching bounds when the output is required to be $k$-sparse, in which case we achieve $k/\epsilon^{p} \,\mathrm{polylog}(n)$. This shows that the distinction between the complexity of sparse and non-sparse outputs is fundamental.
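The benchmark on the right-hand side of the guarantee is concrete: for any fixed vector $x$, the minimizing $k$-sparse $x'$ simply keeps the $k$ largest-magnitude entries of $x$ and zeroes the rest, so the benchmark error is the norm of the "tail". A minimal NumPy sketch of checking the $(1+\epsilon)$ guarantee (illustrative only; the function names are ours, and this is not the paper's measurement scheme, just the error criterion it bounds):

```python
import numpy as np

def best_k_sparse(x, k):
    """Best k-sparse approximation of x under any l_p norm:
    keep the k largest-magnitude entries, zero the rest."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the top-k entries
    out[idx] = x[idx]
    return out

def recovery_ok(x, x_hat, k, eps, p=2):
    """Check the (1+eps)-approximate recovery guarantee:
    ||x_hat - x||_p <= (1 + eps) * min_{k-sparse x'} ||x - x'||_p."""
    tail = x - best_k_sparse(x, k)  # residual of the best k-sparse fit
    return np.linalg.norm(x_hat - x, ord=p) <= (1 + eps) * np.linalg.norm(tail, ord=p)

x = np.array([10.0, -7.0, 0.3, -0.2, 0.1])
k, eps = 2, 0.5
# Exact top-k recovery meets the guarantee: its error equals the tail norm.
print(recovery_ok(x, best_k_sparse(x, k), k, eps))  # True
```

Any estimate within a $(1+\epsilon)$ factor of the tail norm passes; the paper's question is how few rows of $A$ suffice to produce such an $\hat{x}$ for all $x$ simultaneously.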