Streaming Algorithms via Precision Sampling

Alexandr Andoni, Robert Krauthgamer, Krzysztof Onak
{"title":"Streaming Algorithms via Precision Sampling","authors":"Alexandr Andoni, Robert Krauthgamer, Krzysztof Onak","doi":"10.1109/FOCS.2011.82","DOIUrl":null,"url":null,"abstract":"A technique introduced by Indyk and Woodruff (STOC 2005) has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called Precision Sampling. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x=(x_1,x_2,\\ldots,x_n)$, which is useful for the following applications:* Estimating the $F_k$-moment of $x$, for $k>2$.* Estimating the $\\ell_p$-norm of $x$, for $p\\in[1,2]$, with small update time.* Estimating cascaded norms $\\ell_p(\\ell_q)$ for all $p,q>0$.* $\\ell_1$ sampling, where the goal is to produce an element $i$ with probability (approximately) $|x_i|/\\|x\\|_1$. It extends to similarly defined $\\ell_p$-sampling, for $p\\in [1,2]$. For all these applications the algorithm is essentially the same: scale the vector $x$ entry-wise by a well-chosen random vector, and run a heavy-hitter estimation algorithm on the resulting vector. Our sketch is a linear function of $x$, thereby allowing general updates to the vector $x$. Precision Sampling itself addresses the problem of estimating a sum $\\sum_{i=1}^n a_i$ from weak estimates of each real $a_i\\in[0,1]$. More precisely, the estimator first chooses a desired precision$u_i\\in(0,1]$ for each $i\\in[n]$, and then it receives an estimate of every $a_i$ within additive $u_i$. Its goal is to provide a good approximation to $\\sum a_i$ while keeping a tab on the ``approximation cost'' $\\sum_i (1/u_i)$. Here we refine previous work (Andoni, Krauthgamer, and Onak, FOCS 2010)which shows that as long as $\\sum a_i=\\Omega(1)$, a good multiplicative approximation can be achieved using total precision of only $O(n\\log n)$.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"101","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FOCS.2011.82","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 101

Abstract

A technique introduced by Indyk and Woodruff (STOC 2005) has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called Precision Sampling. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x=(x_1,x_2,\ldots,x_n)$, which is useful for the following applications:

* Estimating the $F_k$-moment of $x$, for $k>2$.
* Estimating the $\ell_p$-norm of $x$, for $p\in[1,2]$, with small update time.
* Estimating cascaded norms $\ell_p(\ell_q)$ for all $p,q>0$.
* $\ell_1$-sampling, where the goal is to produce an element $i$ with probability (approximately) $|x_i|/\|x\|_1$. This extends to similarly defined $\ell_p$-sampling, for $p\in[1,2]$.

For all these applications the algorithm is essentially the same: scale the vector $x$ entry-wise by a well-chosen random vector, and run a heavy-hitter estimation algorithm on the resulting vector. Our sketch is a linear function of $x$, thereby allowing general updates to the vector $x$.

Precision Sampling itself addresses the problem of estimating a sum $\sum_{i=1}^n a_i$ from weak estimates of each real $a_i\in[0,1]$. More precisely, the estimator first chooses a desired precision $u_i\in(0,1]$ for each $i\in[n]$, and then it receives an estimate of every $a_i$ within additive error $u_i$. Its goal is to provide a good approximation to $\sum_i a_i$ while keeping tabs on the ``approximation cost'' $\sum_i (1/u_i)$. Here we refine previous work (Andoni, Krauthgamer, and Onak, FOCS 2010), which shows that as long as $\sum_i a_i=\Omega(1)$, a good multiplicative approximation can be achieved using total precision of only $O(n\log n)$.
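
To make the estimation problem concrete, below is a minimal Python sketch of the precision-sampling idea under simplifying assumptions: the precisions $u_i$ are drawn uniformly from $(0,1)$, the weak oracle returns $a_i$ plus random noise bounded by $u_i$, and the estimator rescales the count of indices whose noisy estimate exceeds $u_i/\varepsilon$. The function name `precision_sampling_estimate` and this particular threshold rule are illustrative choices only, not the paper's refined estimator, which gives stronger accuracy and total-precision guarantees.

```python
# Toy illustration of Precision Sampling (simplified; not the paper's exact lemma).
# Goal: estimate S = sum_i a_i with a_i in [0,1], when each a_i is only available
# up to an additive error u_i that we choose in advance.
import random


def precision_sampling_estimate(a, eps=0.1, rng=None):
    """Pick u_i ~ Uniform(0,1); ask a weak oracle for a_i within +/- u_i;
    count how often the noisy estimate exceeds u_i / eps and rescale.
    Since Pr[u_i < eps * a_i] = eps * a_i, the rescaled count approximates sum(a)."""
    rng = rng or random.Random()
    count = 0
    cost = 0.0
    for a_i in a:
        u_i = rng.random() or 1e-12           # precision u_i in (0, 1)
        cost += 1.0 / u_i                      # "approximation cost" sum_i 1/u_i
        # Weak oracle: any value within +/- u_i of a_i is allowed; use random noise here.
        a_hat = a_i + rng.uniform(-u_i, u_i)
        if a_hat > u_i / eps:
            count += 1
    return count / eps, cost


if __name__ == "__main__":
    rng = random.Random(0)
    n = 100_000
    a = [rng.random() for _ in range(n)]       # arbitrary values in [0, 1]
    est, cost = precision_sampling_estimate(a, eps=0.1, rng=rng)
    print("true sum      :", sum(a))
    print("estimate      :", est)
    # The realized cost with uniform u_i is heavy-tailed; the paper's refined
    # scheme shows total precision O(n log n) suffices.
    print("precision cost:", cost)
```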
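
The scale-then-find-the-heavy-entry recipe can be illustrated in the same spirit. The sketch below estimates $\|x\|_p^p$ by scaling each coordinate by an independent exponential-based scalar and reading off the maximum, exploiting the max-stability of exponentials; this is a non-streaming stand-in in which an exact maximum replaces the CountSketch-style heavy-hitter subroutine, and the choice of exponential scalars is one well-known instance of the recipe rather than the paper's exact construction.

```python
# Non-streaming simulation of "scale entry-wise by a random vector, then find
# the heavy entry" for estimating ||x||_p^p. Illustrative assumptions: i.i.d.
# exponential scalars (max-stability) and an exact max instead of a sketch.
import math
import random


def lp_norm_estimate(x, p=1.5, reps=1001, rng=None):
    rng = rng or random.Random(0)
    samples = []
    for _ in range(reps):
        # Max-stability of exponentials: max_i |x_i|^p / E_i  ~  ||x||_p^p / Exp(1).
        m = max(abs(xi) ** p / rng.expovariate(1.0) for xi in x)
        samples.append(m)
    samples.sort()
    # The median of ||x||_p^p / Exp(1) equals ||x||_p^p / ln 2, so rescale by ln 2.
    return math.log(2) * samples[reps // 2]


if __name__ == "__main__":
    rng = random.Random(1)
    x = [rng.gauss(0, 1) for _ in range(1000)]
    p = 1.5
    true_val = sum(abs(xi) ** p for xi in x)
    print("true  ||x||_p^p:", true_val)
    print("estimate       :", lp_norm_estimate(x, p=p, rng=rng))
```

In a genuine one-pass streaming algorithm the scaled vector is never stored explicitly; the heavy entry is recovered from a small linear sketch, which is what makes the resulting data structure a linear function of $x$ and hence able to handle general updates.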