Enhanced Privacy Bound for Shuffle Model with Personalized Privacy
Authors: Yixuan Liu, Yuhan Liu, Li Xiong, Yujie Gu, Hong Chen
Venue: Proceedings of the ACM International Conference on Information & Knowledge Management, 2024, pages 3907-3911
Publication date: 2024-10-01
DOI: 10.1145/3627673.3679911 (https://doi.org/10.1145/3627673.3679911)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094779/pdf/
Citations: 0
Abstract
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol that significantly amplifies the central DP guarantee by anonymizing and shuffling the locally randomized data. Yet, deriving a tight privacy bound is challenging due to its complicated randomization protocol. While most existing works have focused on uniform local privacy settings, this work focuses on a more practical personalized privacy setting. To bound the privacy after shuffling, we need to capture the probability of each user generating clones of the neighboring data points and to quantify the indistinguishability between the two distributions of the number of clones on neighboring datasets. Existing works either inaccurately capture this probability or underestimate the indistinguishability. We develop a more precise analysis, which yields a general and tighter bound for arbitrary DP mechanisms. First, we derive the clone-generating probability by hypothesis testing, which leads to a more accurate characterization of the probability. Second, we analyze the indistinguishability in the context of f-DP, where the convexity of the distributions is leveraged to achieve a tighter privacy bound. Theoretical and numerical results demonstrate that our bound remarkably outperforms the existing results in the literature. The code is publicly available at https://github.com/Emory-AIMS/HPS.git.
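To make the protocol concrete, here is a minimal illustrative sketch of the personalized shuffle model the abstract describes: each user perturbs their own bit with a local randomizer calibrated to their personal privacy budget, and a trusted shuffler then permutes the reports so no report can be linked back to its sender. This sketch is not the paper's analysis; it uses binary randomized response as a stand-in local DP mechanism, and the function names (`randomized_response`, `shuffle_model`) are hypothetical.

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise flip it.
    This is epsilon-LDP for a single bit."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

def shuffle_model(bits, epsilons, seed=0):
    """Personalized shuffle model: user i randomizes locally with
    their own budget epsilons[i]; the shuffler then permutes the
    reports, which is the anonymization step that amplifies the
    central DP guarantee."""
    rng = random.Random(seed)
    reports = [randomized_response(b, eps, rng)
               for b, eps in zip(bits, epsilons)]
    rng.shuffle(reports)  # destroys the report-to-user linkage
    return reports

# Example: 5 users with heterogeneous (personalized) local budgets.
bits = [1, 0, 1, 1, 0]
epsilons = [0.5, 1.0, 2.0, 0.5, 3.0]
print(shuffle_model(bits, epsilons))
```

Intuitively, a user with a small epsilon flips their bit more often, producing "clones" of other users' reports; the paper's contribution is a tighter accounting of how these clones hide any single user's data after shuffling.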