{"title":"Adversary Resilient Learned Bloom Filters","authors":"Allison Bishop, Hayder Tirmazi","doi":"arxiv-2409.06556","DOIUrl":null,"url":null,"abstract":"Creating an adversary resilient Learned Bloom Filter\n\\cite{learnedindexstructures} with provable guarantees is an open problem\n\\cite{reviriego1}. We define a strong adversarial model for the Learned Bloom\nFilter. We also construct two adversary resilient variants of the Learned Bloom\nFilter called the Uptown Bodega Filter and the Downtown Bodega Filter. Our\nadversarial model extends an existing adversarial model designed for the\nClassical (i.e not ``Learned'') Bloom Filter by Naor Yogev~\\cite{moni1} and\nconsiders computationally bounded adversaries that run in probabilistic\npolynomial time (PPT). We show that if pseudo-random permutations exist, then a\nsecure Learned Bloom Filter may be constructed with $\\lambda$ extra bits of\nmemory and at most one extra pseudo-random permutation in the critical path. We\nfurther show that, if pseudo-random permutations exist, then a \\textit{high\nutility} Learned Bloom Filter may be constructed with $2\\lambda$ extra bits of\nmemory and at most one extra pseudo-random permutation in the critical path.\nFinally, we construct a hybrid adversarial model for the case where a fraction\nof the workload is chosen by an adversary. 
We show realistic scenarios where\nusing the Downtown Bodega Filter gives better performance guarantees compared\nto alternative approaches in this hybrid model.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"66 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Data Structures and Algorithms","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06556","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Creating an adversary-resilient Learned Bloom Filter \cite{learnedindexstructures} with provable guarantees is an open problem \cite{reviriego1}. We define a strong adversarial model for the Learned Bloom Filter. We also construct two adversary-resilient variants of the Learned Bloom Filter, called the Uptown Bodega Filter and the Downtown Bodega Filter. Our adversarial model extends an existing adversarial model designed for the Classical (i.e., not ``Learned'') Bloom Filter by Naor and Yogev~\cite{moni1} and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). We show that if pseudo-random permutations exist, then a secure Learned Bloom Filter may be constructed with $\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We further show that, if pseudo-random permutations exist, then a \textit{high utility} Learned Bloom Filter may be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. Finally, we construct a hybrid adversarial model for the case where a fraction of the workload is chosen by an adversary. We show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this hybrid model.
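For background, the Learned Bloom Filter that the abstract builds on pairs a learned score model with a backup classical Bloom filter holding the model's false negatives, so the structure retains the classical one-sided error guarantee (no false negatives). The following is a minimal sketch of that baseline construction, not of the paper's Bodega Filter variants; the class names, the toy `model` score function, and the parameter choices are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Classical Bloom filter: m bits, k independent hash probes."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indexes(self, item):
        # Derive k probe positions by salting a cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def query(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

class LearnedBloomFilter:
    """Learned Bloom Filter (baseline construction): a score model
    answers first; keys the model would wrongly reject are stored in
    a backup Bloom filter, preserving one-sided error."""
    def __init__(self, model, threshold, keys, m=1024, k=4):
        self.model, self.threshold = model, threshold
        self.backup = BloomFilter(m, k)
        for key in keys:
            if model(key) < threshold:   # model misses this true key,
                self.backup.add(key)     # so the backup must cover it

    def query(self, item):
        if self.model(item) >= self.threshold:
            return True                  # model claims membership
        return self.backup.query(item)   # otherwise defer to backup
```

In the paper's adversarial setting, an attacker who can probe the model offline can craft queries that concentrate false positives; the abstract's constructions address this by inserting a pseudo-random permutation into the critical path, which this baseline sketch deliberately omits.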