Deciding Differential Privacy for Programs with Finite Inputs and Outputs.

G. Barthe, Rohit Chadha, V. Jagannath, A. Sistla, Mahesh Viswanathan
DOI: 10.1145/3373718.339479
Venue: arXiv: Cryptography and Security
Published: 2019-10-09
Citations: 13

Abstract

Differential privacy is a de facto standard for statistical computations over databases that contain private data. The strength of differential privacy lies in a rigorous mathematical definition that guarantees individual privacy and yet allows for accurate statistical results. Thanks to its mathematical definition, differential privacy is also a natural target for formal analysis. A broad line of work uses logical methods for proving privacy. However, these methods are not complete, and only partially automated. A recent and complementary line of work uses statistical methods for finding privacy violations. However, these methods only provide statistical guarantees (but no proofs).

We propose the first decision procedure for checking the differential privacy of a non-trivial class of probabilistic computations. Our procedure takes as input a program P parametrized by a privacy budget $\epsilon$, and either proves differential privacy for all possible values of $\epsilon$ or generates a counterexample. In addition, our procedure applies both to $\epsilon$-differential privacy and $(\epsilon,\delta)$-differential privacy. Technically, the decision procedure is based on a novel and judicious encoding of the semantics of programs in our class into a decidable fragment of the first-order theory of the reals with exponentiation. We implement our procedure and use it for (dis)proving privacy bounds for many well-known examples, including randomized response, histogram, report noisy max and sparse vector.
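To make the setting concrete: for programs with finite inputs and outputs and a *fixed* numeric $\epsilon$, the $\epsilon$-DP condition $P[o \mid x] \le e^{\epsilon} \cdot P[o \mid x']$ for all neighboring inputs $x, x'$ and all outputs $o$ can be checked by direct enumeration. The sketch below illustrates this on randomized response, one of the paper's benchmark examples. This is *not* the paper's decision procedure (which symbolically encodes program semantics into the theory of the reals with exponentiation and decides the property for all $\epsilon$); it is a minimal brute-force check for one concrete $\epsilon$, with all function names chosen here for illustration.

```python
import math

def randomized_response(eps):
    """Output distribution of randomized response on a single bit:
    answer truthfully with probability e^eps / (1 + e^eps), flip otherwise.
    Returns a dict mapping each input to its {output: probability} table."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return {
        0: {0: p, 1: 1 - p},
        1: {0: 1 - p, 1: p},
    }

def is_eps_dp(dist, eps, tol=1e-12):
    """Brute-force eps-DP check for a finite input/output mechanism:
    verify P[o | x] <= e^eps * P[o | x'] for every input pair and output.
    (Here every pair of inputs is treated as neighboring.)"""
    bound = math.exp(eps)
    for x in dist:
        for xp in dist:
            for o, p in dist[x].items():
                if p > bound * dist[xp].get(o, 0.0) + tol:
                    return False
    return True
```

For example, randomized response with budget $\epsilon$ satisfies exactly $\epsilon$-DP, so `is_eps_dp(randomized_response(0.5), 0.5)` holds, while checking the same mechanism against a smaller budget, `is_eps_dp(randomized_response(1.0), 0.5)`, fails. The paper's contribution is precisely to avoid fixing $\epsilon$: the symbolic encoding decides the property for all values of the parameter at once, or produces a counterexample.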