{"title":"局部差分隐私下的分布仿真","authors":"S. Asoodeh","doi":"10.1109/cwit55308.2022.9817663","DOIUrl":null,"url":null,"abstract":"We investigate the problem of distribution simu-lation under local differential privacy: Alice and Bob observe sequences <tex>$X^{n}$</tex> and <tex>$Y^{n}$</tex> respectively, where <tex>$Y^{n}$</tex> is generated by a non-interactive <tex>$\\varepsilon$</tex> -Iocally differentially private (LDP) mechanism from <tex>$X^{n}$</tex>. The goal is for Alice and Bob to output <tex>$U$</tex> and <tex>$V$</tex> from a joint distribution that is close in total variation distance to a target distribution <tex>$P_{UV}$</tex>. As the main result, we show that such task is impossible if the hynercontractivity coefficient of <tex>$P_{UV}$</tex> is strictly bigger than <tex>$\\left(\\frac{e^{\\varepsilon}-1}{e^{\\varepsilon}+1}\\right)^{2}$</tex> . The proof of this result also leads to a new operational interpretation of LDP mechanisms: if <tex>$Y$</tex> is an output of an <tex>$\\varepsilon$</tex> -LDP mechanism with input <tex>$X$</tex>, then the probability of correctly guessing <tex>$f(X)$</tex> given <tex>$Y$</tex> is bigger than the probability of blind guessing only by <tex>$\\frac{e^{\\varepsilon}-1}{e^{\\varepsilon}+1}$</tex>, for any deterministic finitely-supported function <tex>$f$</tex> • If <tex>$f(X)$</tex> is continuous, then a similar result holds for the minimum mean-squared error in estimating <tex>$f(X)$</tex> given <tex>$Y$</tex>.","PeriodicalId":401562,"journal":{"name":"2022 17th Canadian Workshop on Information Theory (CWIT)","volume":"205 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Distribution Simulation Under Local Differential Privacy\",\"authors\":\"S. Asoodeh\",\"doi\":\"10.1109/cwit55308.2022.9817663\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We investigate the problem of distribution simu-lation under local differential privacy: Alice and Bob observe sequences <tex>$X^{n}$</tex> and <tex>$Y^{n}$</tex> respectively, where <tex>$Y^{n}$</tex> is generated by a non-interactive <tex>$\\\\varepsilon$</tex> -Iocally differentially private (LDP) mechanism from <tex>$X^{n}$</tex>. The goal is for Alice and Bob to output <tex>$U$</tex> and <tex>$V$</tex> from a joint distribution that is close in total variation distance to a target distribution <tex>$P_{UV}$</tex>. As the main result, we show that such task is impossible if the hynercontractivity coefficient of <tex>$P_{UV}$</tex> is strictly bigger than <tex>$\\\\left(\\\\frac{e^{\\\\varepsilon}-1}{e^{\\\\varepsilon}+1}\\\\right)^{2}$</tex> . 
The proof of this result also leads to a new operational interpretation of LDP mechanisms: if <tex>$Y$</tex> is an output of an <tex>$\\\\varepsilon$</tex> -LDP mechanism with input <tex>$X$</tex>, then the probability of correctly guessing <tex>$f(X)$</tex> given <tex>$Y$</tex> is bigger than the probability of blind guessing only by <tex>$\\\\frac{e^{\\\\varepsilon}-1}{e^{\\\\varepsilon}+1}$</tex>, for any deterministic finitely-supported function <tex>$f$</tex> • If <tex>$f(X)$</tex> is continuous, then a similar result holds for the minimum mean-squared error in estimating <tex>$f(X)$</tex> given <tex>$Y$</tex>.\",\"PeriodicalId\":401562,\"journal\":{\"name\":\"2022 17th Canadian Workshop on Information Theory (CWIT)\",\"volume\":\"205 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 17th Canadian Workshop on Information Theory (CWIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/cwit55308.2022.9817663\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 17th Canadian Workshop on Information Theory (CWIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/cwit55308.2022.9817663","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Distribution Simulation Under Local Differential Privacy
We investigate the problem of distribution simulation under local differential privacy: Alice and Bob observe sequences $X^{n}$ and $Y^{n}$ respectively, where $Y^{n}$ is generated from $X^{n}$ by a non-interactive $\varepsilon$-locally differentially private (LDP) mechanism. The goal is for Alice and Bob to output $U$ and $V$ from a joint distribution that is close in total variation distance to a target distribution $P_{UV}$. As the main result, we show that this task is impossible if the hypercontractivity coefficient of $P_{UV}$ is strictly larger than $\left(\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}\right)^{2}$. The proof of this result also leads to a new operational interpretation of LDP mechanisms: if $Y$ is the output of an $\varepsilon$-LDP mechanism with input $X$, then the probability of correctly guessing $f(X)$ given $Y$ exceeds the probability of blind guessing by at most $\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}$, for any deterministic finitely-supported function $f$. If $f(X)$ is continuous, then a similar result holds for the minimum mean-squared error in estimating $f(X)$ given $Y$.
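As a quick numerical illustration of the stated guessing-advantage bound (not part of the paper), the sketch below simulates binary randomized response, a canonical $\varepsilon$-LDP mechanism, with a uniform bit $X$ and $f$ taken as the identity (a special case of a finitely-supported $f$), and checks that the empirical advantage of MAP guessing over blind guessing stays below $\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}$. The function names and the uniform-prior setup are illustrative assumptions, not constructions from the paper.

import numpy as np

def randomized_response(x, eps, rng):
    # Binary randomized response, a canonical eps-LDP mechanism:
    # report the true bit with probability e^eps/(e^eps+1), flip it otherwise.
    keep = rng.random(x.shape) < np.exp(eps) / (np.exp(eps) + 1)
    return np.where(keep, x, 1 - x)

def empirical_advantage(eps, n=1_000_000, seed=0):
    # Advantage of MAP guessing of X from Y over blind guessing, for a uniform
    # bit X; under the uniform prior the MAP guess of X given Y is Y itself.
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n)
    y = randomized_response(x, eps, rng)
    return np.mean(y == x) - 0.5  # blind guessing succeeds with probability 1/2

for eps in (0.5, 1.0, 2.0):
    bound = (np.exp(eps) - 1) / (np.exp(eps) + 1)
    print(f"eps={eps}: empirical advantage {empirical_advantage(eps):.4f} <= bound {bound:.4f}")

For this mechanism the exact advantage is $\frac{e^{\varepsilon}-1}{2(e^{\varepsilon}+1)}$, i.e. half the bound, so the simulation is consistent with (but does not saturate) the abstract's inequality.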