A Lower Bound for Dynamic Approximate Membership Data Structures
Shachar Lovett, E. Porat
2010 IEEE 51st Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/FOCS.2010.81 (https://doi.org/10.1109/FOCS.2010.81)
Published: 2010-10-01
Citations: 23
Abstract
An approximate membership data structure is a randomized data structure for representing a set that supports membership queries. It allows a small false positive error rate but has no false negative errors. Such data structures were first introduced by Bloom in the 1970s and have since found numerous applications, mainly in distributed systems, database systems, and networks. Bloom's algorithm is quite effective: it can store a set $S$ of size $n$ using only $\approx 1.44 n \log_2(1/\epsilon)$ bits while having false positive error $\epsilon$. This is within a constant factor of the entropy lower bound of $n \log_2(1/\epsilon)$ for storing such sets. Closing this gap is an important open problem, as Bloom filters are widely used in situations where storage is at a premium. Bloom filters have another property: they are dynamic. That is, they support the iterative insertion of up to $n$ elements. In fact, if one removes this requirement, there exist static data structures which receive the entire set at once and can almost achieve the entropy lower bound: they require only $n \log_2(1/\epsilon)(1+o(1))$ bits. Our main result is a new lower bound on the memory requirements of any dynamic approximate membership data structure. We show that for any constant $\epsilon>0$, any such data structure achieving a false positive error rate of $\epsilon$ must use at least $C(\epsilon) \cdot n \log_2(1/\epsilon)$ memory bits, where $C(\epsilon)>1$ depends only on $\epsilon$. This shows that the entropy lower bound cannot be achieved by dynamic data structures for any constant error rate. In fact, our lower bound holds even in the setting where the insertion and query algorithms may use shared randomness, and where they are only required to perform well on average.
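To make the space bound in the abstract concrete, the following is a minimal sketch of a classic Bloom filter (not the paper's lower-bound construction). It sizes the bit array to $m \approx 1.44\, n \log_2(1/\epsilon)$ bits with $k \approx \log_2(1/\epsilon)$ hash functions, which is the standard parameter choice that yields false positive rate $\approx \epsilon$; the class name and the use of SHA-256 to derive the $k$ hashes are illustrative choices, not from the paper.

```python
import hashlib
import math


class BloomFilter:
    """A minimal Bloom filter sketch.

    Classic analysis: for capacity n and target false-positive rate eps,
    use m = n * log2(1/eps) * log2(e) ~= 1.44 * n * log2(1/eps) bits
    and k = log2(1/eps) hash functions. Inserted elements are never
    reported absent (no false negatives); non-members are reported
    present with probability ~= eps (false positives).
    """

    def __init__(self, n: int, eps: float):
        self.m = max(1, math.ceil(n * math.log2(1 / eps) * math.log2(math.e)))
        self.k = max(1, round(math.log2(1 / eps)))
        self.bits = bytearray((self.m + 7) // 8)  # m bits, packed into bytes

    def _indexes(self, item: str):
        # Derive k independent-looking indexes by salting SHA-256 with i.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: str) -> bool:
        # True iff every hashed bit is set; all inserted items pass.
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))
```

Note the dynamic behavior the abstract highlights: elements are inserted one at a time, and a membership query can be issued at any point between insertions, which is exactly the setting in which the paper proves that $n \log_2(1/\epsilon)$ bits cannot suffice.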