2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: Latest Publications

Approximation Algorithms for Correlated Knapsacks and Non-martingale Bandits
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-17 DOI: 10.1109/FOCS.2011.48
Anupam Gupta, Ravishankar Krishnaswamy, M. Molinaro, R. Ravi
{"title":"Approximation Algorithms for Correlated Knapsacks and Non-martingale Bandits","authors":"Anupam Gupta, Ravishankar Krishnaswamy, M. Molinaro, R. Ravi","doi":"10.1109/FOCS.2011.48","DOIUrl":"https://doi.org/10.1109/FOCS.2011.48","url":null,"abstract":"In the stochastic knapsack problem, we are given a knapsack of size B, and a set of items whose sizes and rewards are drawn from a known probability distribution. To know the actual size and reward we have to schedule the item -- when it completes, we get to know these values. The goal is to schedule the items (possibly making adaptive decisions based on the sizes seen so far) to maximize the expected total reward of items which successfully pack into the knapsack. We know constant-factor approximations when (i) the rewards and sizes are independent, and (ii) we cannot prematurely cancel items after we schedule them. What if either or both assumptions are relaxed? Related stochastic packing problems are the multi-armed bandit (and budgeted learning) problems, here one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to (adaptively) decide which arms to pull, in order to maximize the expected reward obtained after B pulls in total. Much recent work on this problem focuses on the case when the evolution of each arm follows a martingale, i.e., when the expected reward from one pull of an arm is the same as the reward at the current state. What if the rewards do not form a martingale? In this paper, we give O(1)-approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations. Extending the ideas developed here, we give O(1)-approximations for MAB problems without the martingale assumption. Indeed, we can show that previously proposed linear programming relaxations for these problems have large integrality gaps. So we propose new time-indexed LP relaxations, using a decomposition and \"gap-filling\" approach, we convert these fractional solutions to distributions over strategies, and then use the LP values and the time ordering information from these strategies to devise randomized adaptive scheduling algorithms.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131911757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 76
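The setting in the abstract above is easy to simulate. Below is a minimal Python sketch of a correlated stochastic knapsack instance together with one run of a naive greedy adaptive policy; the item distributions are hypothetical, and the greedy policy is only for illustration, not the paper's LP-based O(1)-approximation algorithm.

```python
import random

# Each item is a list of (probability, size, reward) outcomes; size and reward
# are revealed together only when the item is scheduled (they may be correlated).
items = [
    [(0.5, 1, 10), (0.5, 4, 0)],   # small-and-valuable or large-and-worthless
    [(0.9, 2, 3), (0.1, 2, 30)],
    [(1.0, 3, 5)],
]

def sample(item):
    """Draw one (size, reward) outcome for an item."""
    r, acc = random.random(), 0.0
    for p, size, reward in item:
        acc += p
        if r <= acc:
            return size, reward
    return item[-1][1:]

def greedy_policy(items, budget, trials=10000):
    """Average reward of scheduling items in decreasing order of expected reward
    per expected size, stopping when the knapsack overflows (no cancellation)."""
    def ratio(item):
        es = sum(p * s for p, s, _ in item)
        er = sum(p * r for p, _, r in item)
        return er / es
    order = sorted(range(len(items)), key=lambda i: -ratio(items[i]))
    total = 0.0
    for _ in range(trials):
        remaining, reward = budget, 0
        for i in order:
            size, rew = sample(items[i])
            if size > remaining:      # item does not fit: it is lost
                break
            remaining -= size
            reward += rew
        total += reward
    return total / trials

print(greedy_policy(items, budget=5))
```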
A Constant Factor Approximation Algorithm for Unsplittable Flow on Paths
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-17 DOI: 10.1137/120868360
P. Bonsma, J. Schulz, Andreas Wiese
{"title":"A Constant Factor Approximation Algorithm for Unsplittable Flow on Paths","authors":"P. Bonsma, J. Schulz, Andreas Wiese","doi":"10.1137/120868360","DOIUrl":"https://doi.org/10.1137/120868360","url":null,"abstract":"In this paper, we present a constant-factor approximation algorithm for the unsplittable flow problem on a path. This improves on the previous best known approximation factor of O(log n). The approximation ratio of our algorithm is 7+e for any e>0. In the unsplittable flow problem on a path, we are given a capacitated path P and n tasks, each task having a demand, a profit, and start and end vertices. The goal is to compute a maximum profit set of tasks, such that for each edge e of P, the total demand of selected tasks that use e does not exceed the capacity of e. This is a well-studied problem that occurs naturally in various settings, and therefore it has been studied under alternative names, such as resource allocation, bandwidth allocation, resource constrained scheduling, temporal knapsack and interval packing. Polynomial time constant factor approximation algorithms for the problem were previously known only under the no-bottleneck assumption (in which the maximum task demand must be no greater than the minimum edge capacity). We introduce several novel algorithmic techniques, which might be of independent interest: a framework which reduces the problem to instances with a bounded range of capacities, and a new geometrically inspired dynamic program which solves a special case of the maximum weight independent set of rectangles problem to optimality. In addition, we show that the problem is strongly NP-hard even if all edge capacities are equal and all demands are either 1, 2, or 3.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125433841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 78
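For readers unfamiliar with the problem statement above, the sketch below spells out an unsplittable-flow-on-a-path instance and a feasibility check for a candidate task set. The path, capacities and tasks are hypothetical examples; the paper's (7+ε)-approximation algorithm itself is not reproduced here.

```python
# Edge i of the path connects vertex i to vertex i+1; all names are illustrative.

def feasible(capacities, tasks, selected):
    """capacities[i] is the capacity of edge i; each task is (start, end, demand, profit)
    and uses every edge i with start <= i < end."""
    load = [0] * len(capacities)
    for t in selected:
        start, end, demand, _ = tasks[t]
        for i in range(start, end):
            load[i] += demand
            if load[i] > capacities[i]:
                return False
    return True

capacities = [4, 4, 3]                           # a path with 4 vertices / 3 edges
tasks = [(0, 2, 2, 5), (1, 3, 2, 4), (0, 1, 3, 2)]
print(feasible(capacities, tasks, [0, 1]))       # True: total profit 5 + 4 = 9
print(feasible(capacities, tasks, [0, 1, 2]))    # False: edge 0 is overloaded (5 > 4)
```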
Optimal Bounds for Quantum Bit Commitment
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-08 DOI: 10.1109/FOCS.2011.42
A. Chailloux, Iordanis Kerenidis
{"title":"Optimal Bounds for Quantum Bit Commitment","authors":"A. Chailloux, Iordanis Kerenidis","doi":"10.1109/FOCS.2011.42","DOIUrl":"https://doi.org/10.1109/FOCS.2011.42","url":null,"abstract":"Bit commitment is a fundamental cryptographic primitive with numerous applications. Quantum information allows for bit commitment schemes in the information theoretic setting where no dishonest party can perfectly cheat. The previously best-known quantum protocol by Ambainis achieved a cheating probability of at most 3/4. On the other hand, Kitaev showed that no quantum protocol can have cheating probability less than 1sqrt{2} (his lower bound on coin flipping can be easily extended to bit commitment). Closing this gap has since been an important open question. In this paper, we provide the optimal bound for quantum bit commitment. First, we show a lower bound of approximately 0.739, improving Kitaev's lower bound. For this, we present some generic cheating strategies for Alice and Bob and conclude by proving a new relation between the trace distance and fidelity of two quantum states. Second, we present an optimal quantum bit commitment protocol which has cheating probability arbitrarily close to 0.739. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + eps in order to achieve a quantum bit commitment protocol with cheating probability 0.739 + O(eps). We then use the optimal quantum weak coin flipping protocol described by Mochon. Last, in order to stress the fact that our protocol uses quantum effects beyond the weak coin flip, we show that any classical bit commitment protocol with access to perfect weak (or strong) coin flipping has cheating probability at least 3/4.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121315016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 63
On the Complexity of Commuting Local Hamiltonians, and Tight Conditions for Topological Order in Such Systems
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-03 DOI: 10.1109/FOCS.2011.58
D. Aharonov, Lior Eldar
{"title":"On the Complexity of Commuting Local Hamiltonians, and Tight Conditions for Topological Order in Such Systems","authors":"D. Aharonov, Lior Eldar","doi":"10.1109/FOCS.2011.58","DOIUrl":"https://doi.org/10.1109/FOCS.2011.58","url":null,"abstract":"The local Hamiltonian problem plays the equivalent role of SAT in quantum complexity theory. Understanding the complexity of the intermediate case in which the constraints are quantum but all local terms in the Hamiltonian commute, is of importance for conceptual, physical and computational complexity reasons. Bravyi and Vyalyi showed in 2003, using a clever application of the representation theory of C*-algebras, that if the terms in the Hamiltonian are all two-local, the problem is in NP, and the entanglement in the ground states is local. The general case remained open since then. In this paper we extend this result beyond the two-local case, to the case of three-qubit interactions. We then extend our results even further, and show that NP verification is possible for three-wise interaction between qutrits as well, as long as the interaction graph is planar and also \" nearly Euclidean & quot, in some well-defined sense. The proofs imply that in all such systems, the entanglement in the ground states is local. These extensions imply an intriguing sharp transition phenomenon in commuting Hamiltonian systems: the ground spaces of 3-local \" physical & quot, systems based on qubits and qutrits are diagonalizable by a basis whose entanglement is highly local, while even slightly more involved interactions (the particle dimensionality or the locality of the interaction is larger) already exhibit an important long-range entanglement property called Topological Order. Our results thus imply that Kitaev's celebrated Toric code construction is, in a well defined sense, optimal as a construction of Topological Order based on commuting Hamiltonians.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123288099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
The Minimum k-way Cut of Bounded Size is Fixed-Parameter Tractable
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-01-24 DOI: 10.1109/FOCS.2011.53
K. Kawarabayashi, M. Thorup
{"title":"The Minimum k-way Cut of Bounded Size is Fixed-Parameter Tractable","authors":"K. Kawarabayashi, M. Thorup","doi":"10.1109/FOCS.2011.53","DOIUrl":"https://doi.org/10.1109/FOCS.2011.53","url":null,"abstract":"We consider the minimum $k$-way cut problem for unweighted undirected graphs with a size bound $s$ on the number of cut edges allowed. Thus we seek to remove as few edges as possible so as to split a graph into $k$ components, or report that this requires cutting more than $s$ edges. We show that this problem is fixed-parameter tractable (FPT) with the standard parameterization in terms of the solution size $s$. More precisely, for $s=O(1)$, we present a quadratic time algorithm. Moreover, we present a much easier linear time algorithm for planar graphs and bounded genus graphs. Our tractability result stands in contrast to known W[1] hardness of related problems. Without the size bound, Downey et al.~[2003] proved that the minimum $k$-way cut problem is W[1] hard with parameter $k$, and this is even for simple unweighted graphs. Downey et al.~asked about the status for planar graphs. We get linear time with fixed parameter $k$ for simple planar graphs since the minimum $k$-way cut of a planar graph is of size at most $6k$. More generally, we get FPT with parameter $k$ for any graph class with bounded average degree. A simple reduction shows that vertex cuts are at least as hard as edge cuts, so the minimum $k$-way vertex cut is also W[1] hard with parameter $k$. Marx [2004] proved that finding a minimum $k$-way vertex cut of size $s$ is also W[1] hard with parameter $s$. Marx asked about the FPT status with edge cuts, which we prove tractable here. We are not aware of any other cut problem where the vertex version is W[1] hard but the edge version is FPT, e.g., Marx [2004] proved that the $k$-terminal cut problem is FPT parameterized by the cut size, both for edge and vertex cuts.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"365 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122774587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 61
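As a concrete reading of the problem definition above, the following brute-force sketch tries every cut of at most $s$ edges and tests whether its removal leaves at least $k$ components. It runs in time exponential in $s$, so it only illustrates the problem the paper solves, not its FPT algorithm; the example graph is hypothetical.

```python
from itertools import combinations

def components(n, edges):
    """Number of connected components of a graph on vertices 0..n-1 (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in range(n)})

def min_kway_cut_bounded(n, edges, k, s):
    """Return a smallest edge set of size <= s whose removal yields >= k components,
    or None if every such cut needs more than s edges."""
    for size in range(s + 1):
        for cut in combinations(range(len(edges)), size):
            kept = [e for i, e in enumerate(edges) if i not in cut]
            if components(n, kept) >= k:
                return [edges[i] for i in cut]
    return None

# A 6-cycle: splitting it into 2 components requires cutting exactly 2 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(min_kway_cut_bounded(6, edges, k=2, s=2))
```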
Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2010-12-14 DOI: 10.1109/FOCS.2011.33
J. Li, A. Deshpande
{"title":"Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems","authors":"J. Li, A. Deshpande","doi":"10.1109/FOCS.2011.33","DOIUrl":"https://doi.org/10.1109/FOCS.2011.33","url":null,"abstract":"We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum weight spanning trees, and minimum weight matchings over probabilistic graphs, and other combinatorial problems like knapsack. We observe that the expected value is inadequate in capturing different types of {em risk-averse} or {em risk-prone} behaviors, and instead we consider a more general objective which is to maximize the {em expected utility} of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial time approximation algorithm with {em additive error} $epsilon$ for any $epsilon>0$, if there is a pseudopolynomial time algorithm for the {em exact} version of the problem (This is true for the problems mentioned above)and the maximum value of the utility function is bounded by a constant. Our result generalizes several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122111362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
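A tiny numerical example shows why expected weight and expected utility can disagree, which is the motivation stated in the abstract above. The sketch below compares a deterministic solution with a riskier one under a hypothetical risk-averse exponential utility; it illustrates the objective only, not the paper's approximation algorithm or its utility-decomposition technique.

```python
import math

def expected_utility(outcomes, u):
    """outcomes: list of (probability, total_weight) pairs for one candidate solution."""
    return sum(p * u(w) for p, w in outcomes)

# A risk-averse exponential utility: larger is better, gains saturate.
u = lambda w: 1 - math.exp(-w)

safe  = [(1.0, 1.0)]                 # always weight 1.0
risky = [(0.5, 0.0), (0.5, 2.5)]     # higher expected weight (1.25) but volatile

print("expected weights :", 1.0, 1.25)
print("expected utility :", expected_utility(safe, u), expected_utility(risky, u))
# The risky solution wins on expected weight, yet the safe one has higher
# expected utility (about 0.63 vs 0.46) under this risk-averse utility.
```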
Enumerative Lattice Algorithms in any Norm Via M-ellipsoid Coverings
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2010-11-25 DOI: 10.1109/FOCS.2011.31
D. Dadush, Chris Peikert, S. Vempala
{"title":"Enumerative Lattice Algorithms in any Norm Via M-ellipsoid Coverings","authors":"D. Dadush, Chris Peikert, S. Vempala","doi":"10.1109/FOCS.2011.31","DOIUrl":"https://doi.org/10.1109/FOCS.2011.31","url":null,"abstract":"We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the M-ellipsoid, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010)for lattice problems in the l2 norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected 2^O(n)-time algorithm for computing an M-ellipsoid for any n-dimensional convex body. As applications, we give deterministic 2^O(n)-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on n-dimensional lattices in any (semi-)norm given an M-ellipsoid of the unit ball. In many norms of interest, including all lp norms, an M-ellipsoid is computable in deterministic poly(n) time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the “AKS sieve”for exact SVP and CVP (Ajtai, Kumar, and Siva Kumar, STOC2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected O(f*(n))^n-time algorithm for Integer Programming, where f*(n) denotes the optimal bound in the so-called “flatnesstheorem, ” which satisfies f*(n) = O(n^(4/3) polylog(n))and is conjectured to be f*(n) = O(n). Our runtime improves upon the previous best of O(n^2)^n by Hildebrand and Koppe(2010).","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133590717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 101
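To make the SVP-in-any-norm application concrete, the sketch below finds a shortest nonzero vector of a toy 2-dimensional lattice under an arbitrary norm by exhaustive search over small integer combinations. The basis, the norm and the search radius are hypothetical; the paper's 2^O(n)-time enumeration via M-ellipsoid coverings is, of course, far more sophisticated than this scan.

```python
def shortest_vector(basis, norm, bound=20):
    """Scan integer combinations c1*b1 + c2*b2 with |ci| <= bound and return a
    shortest nonzero lattice vector under the given norm."""
    b1, b2 = basis
    best, best_len = None, float("inf")
    for c1 in range(-bound, bound + 1):
        for c2 in range(-bound, bound + 1):
            if c1 == 0 and c2 == 0:
                continue
            v = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
            length = norm(v)
            if length < best_len:
                best, best_len = v, length
    return best, best_len

l1 = lambda v: abs(v[0]) + abs(v[1])             # SVP in the l1 norm
print(shortest_vector([(3, 1), (1, 2)], l1))     # a shortest vector of l1-length 3
```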
Quantum Query Complexity of State Conversion
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2010-11-12 DOI: 10.1109/FOCS.2011.75
Troy Lee, R. Mittal, B. Reichardt, R. Spalek, M. Szegedy
{"title":"Quantum Query Complexity of State Conversion","authors":"Troy Lee, R. Mittal, B. Reichardt, R. Spalek, M. Szegedy","doi":"10.1109/FOCS.2011.75","DOIUrl":"https://doi.org/10.1109/FOCS.2011.75","url":null,"abstract":"State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm. In the special case of function evaluation, the norm is closely related to the general adversary bound, a semi-definite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133172822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 150
Local Distributed Decision
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2010-11-09 DOI: 10.1109/FOCS.2011.17
P. Fraigniaud, Amos Korman, D. Peleg
{"title":"Local Distributed Decision","authors":"P. Fraigniaud, Amos Korman, D. Peleg","doi":"10.1109/FOCS.2011.17","DOIUrl":"https://doi.org/10.1109/FOCS.2011.17","url":null,"abstract":"A central theme in distributed network algorithms concerns understanding and coping with the issue of {em locality}. Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for emph{distributed decision problems}. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard $cal{LOCAL}$ model of computation and define $LD(t)$ (for {em local decision}) as the class of decision problems that can be solved in $t$ communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class $BPLD(t,p,q)$, containing all languages for which there exists a randomized algorithm that runs in $t$ rounds, accepts correct instances with probability at least $p$ and rejects incorrect ones with probability at least $q$. We show that $p^2+q = 1$ is a threshold for the containment of $LD(t)$ in $BPLD(t,p,q)$. More precisely, we show that there exists a language that does not belong to $LD(t)$ for any $t=o(n)$ but does belong to $BPLD(0,p,q)$ for any $p,qin (0,1]$ such that $p^2+qleq 1$. On the other hand, we show that, restricted to hereditary languages, $BPLD(t,p,q) = LD(O(t))$, for any function $t$ and any $p,qin (0,1]$ such that $p^2+q&gt, 1$. In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide emph{all} languages emph{in constant time}. Finally, we introduce the notion of local reduction, and establish some completeness results.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131587246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 51
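The local decision setting described above can be illustrated with a one-round example: each node inspects only its immediate neighborhood and outputs a verdict, and the instance is accepted iff every node accepts. The language used in the sketch (the given labels form a proper 2-coloring) is a standard locally checkable example chosen for illustration; it is not taken from the paper.

```python
def local_decide_proper_coloring(adj, color):
    """adj: adjacency list; color: input label per node. One communication round
    lets each node compare its color with its neighbors' colors."""
    def node_verdict(v):
        return all(color[v] != color[u] for u in adj[v])
    # The instance is in the language iff all local verdicts are 'accept'.
    return all(node_verdict(v) for v in adj)

adj = {0: [1, 2], 1: [0], 2: [0]}   # a star with center 0
print(local_decide_proper_coloring(adj, {0: "red", 1: "blue", 2: "blue"}))  # True
print(local_decide_proper_coloring(adj, {0: "red", 1: "red", 2: "blue"}))   # False
```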
Streaming Algorithms via Precision Sampling
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2010-11-04 DOI: 10.1109/FOCS.2011.82
Alexandr Andoni, Robert Krauthgamer, Krzysztof Onak
{"title":"Streaming Algorithms via Precision Sampling","authors":"Alexandr Andoni, Robert Krauthgamer, Krzysztof Onak","doi":"10.1109/FOCS.2011.82","DOIUrl":"https://doi.org/10.1109/FOCS.2011.82","url":null,"abstract":"A technique introduced by Indyk and Woodruff (STOC 2005) has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called Precision Sampling. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x=(x_1,x_2,ldots,x_n)$, which is useful for the following applications:* Estimating the $F_k$-moment of $x$, for $k>2$.* Estimating the $ell_p$-norm of $x$, for $pin[1,2]$, with small update time.* Estimating cascaded norms $ell_p(ell_q)$ for all $p,q>0$.* $ell_1$ sampling, where the goal is to produce an element $i$ with probability (approximately) $|x_i|/|x|_1$. It extends to similarly defined $ell_p$-sampling, for $pin [1,2]$. For all these applications the algorithm is essentially the same: scale the vector $x$ entry-wise by a well-chosen random vector, and run a heavy-hitter estimation algorithm on the resulting vector. Our sketch is a linear function of $x$, thereby allowing general updates to the vector $x$. Precision Sampling itself addresses the problem of estimating a sum $sum_{i=1}^n a_i$ from weak estimates of each real $a_iin[0,1]$. More precisely, the estimator first chooses a desired precision$u_iin(0,1]$ for each $iin[n]$, and then it receives an estimate of every $a_i$ within additive $u_i$. Its goal is to provide a good approximation to $sum a_i$ while keeping a tab on the ``approximation cost'' $sum_i (1/u_i)$. Here we refine previous work (Andoni, Krauthgamer, and Onak, FOCS 2010)which shows that as long as $sum a_i=Omega(1)$, a good multiplicative approximation can be achieved using total precision of only $O(nlog n)$.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127652881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 101
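The "scale entry-wise by a random vector, then find the heavy hitter" recipe mentioned in the abstract above can be illustrated offline with exponential scalers: if the $t_i$ are i.i.d. Exp(1), then $\min_i t_i/|x_i|^p$ is exponentially distributed with rate $\|x\|_p^p$, so the largest scaled entry, corrected by $\ln 2$ after taking a median over repetitions, concentrates around $\|x\|_p^p$. The sketch below implements only this toy estimator under those assumptions; it is not the paper's linear streaming sketch, and the parameters are illustrative.

```python
import math
import random

def estimate_lp_pth_power(x, p=2, reps=401):
    """Estimate ||x||_p^p by scaling entries with i.i.d. Exp(1) variables,
    keeping the maximum scaled entry, and correcting the median by ln 2."""
    estimates = []
    for _ in range(reps):
        scaled_max = max(abs(xi) ** p / random.expovariate(1.0) for xi in x if xi != 0)
        estimates.append(scaled_max)
    estimates.sort()
    return estimates[len(estimates) // 2] * math.log(2)   # median * ln 2

x = [random.gauss(0, 1) for _ in range(1000)]
exact = sum(abs(xi) ** 2 for xi in x)
print(exact, estimate_lp_pth_power(x, p=2))   # the two values should be close
```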