Compressed Sensing with Adversarial Sparse Noise via L1 Regression

Sushrut Karmalkar, Eric Price
Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA), pages 19:1-19:19
DOI: 10.4230/OASIcs.SOSA.2019.19
Published: September 2018
Citations: 31

Abstract

We present a simple and effective algorithm for the problem of \emph{sparse robust linear regression}. In this problem, one would like to estimate a sparse vector $w^* \in \mathbb{R}^n$ from linear measurements corrupted by sparse noise that can arbitrarily change an adversarially chosen $\eta$ fraction of measured responses $y$, as well as introduce bounded norm noise to the responses. For Gaussian measurements, we show that a simple algorithm based on L1 regression can successfully estimate $w^*$ for any $\eta < \eta_0 \approx 0.239$, and that this threshold is tight for the algorithm. The number of measurements required by the algorithm is $O(k \log \frac{n}{k})$ for $k$-sparse estimation, which is within constant factors of the number needed without any sparse noise. Of the three properties we show---the ability to estimate sparse, as well as dense, $w^*$; the tolerance of a large constant fraction of outliers; and tolerance of adversarial rather than distributional (e.g., Gaussian) dense noise---to the best of our knowledge, no previous result achieved more than two.
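The algorithm the abstract describes is plain L1 regression: solve $\min_w \|Xw - y\|_1$. The sketch below illustrates, in the simplified dense setting (no sparsity constraint on $w^*$), how this tolerates a constant fraction of arbitrarily corrupted responses by casting the L1 minimization as a linear program with slack variables. The dimensions, seed, corruption magnitude, and the use of scipy's `linprog` solver are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, eta = 20, 200, 0.2          # signal dim, measurements, corruption rate

w_star = rng.standard_normal(n)   # ground-truth vector to recover
X = rng.standard_normal((m, n))   # Gaussian measurement matrix
y = X @ w_star

# Adversarial sparse noise: arbitrarily change an eta fraction of responses.
corrupt = rng.choice(m, size=int(eta * m), replace=False)
y[corrupt] += 10 * rng.standard_normal(len(corrupt))

# Solve min_w ||X w - y||_1 as an LP with slack variables t:
#   minimize sum(t)  subject to  -t <= X w - y <= t,  t >= 0
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[X, -np.eye(m)], [-X, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w_hat = res.x[:n]

print(np.linalg.norm(w_hat - w_star))  # small if recovery succeeds
```

For the paper's $k$-sparse setting one would additionally constrain $\|w\|_1$, which keeps the problem a linear program; the dense unconstrained version above is only meant to show the robustness of the L1 objective to outliers.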