Local Methods for Privacy Protection and Impact on Fairness

C. Palamidessi
{"title":"隐私保护的局部方法及其对公平的影响","authors":"C. Palamidessi","doi":"10.1145/3577923.3587263","DOIUrl":null,"url":null,"abstract":"The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate the issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the iterative Bayesian update (IBU), an instance of the expectation-maximization method from statistics. I will show that the IBU, combined with a version of DP called d-\\emphprivacy (also known as metric differential privacy ), outperforms the state-of-the-art, which is based on algebraic methods combined with the randomized response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Then, I will discuss the issue of biased predictions in machine learning, and how DP can affect the level of fairness and accuracy of the trained model. Finally, I will show that the IBU can be applied also in this domain to ensure fairer treatment of disadvantaged groups and reconcile fairness and accuracy.","PeriodicalId":387479,"journal":{"name":"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy","volume":"121 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Local Methods for Privacy Protection and Impact on Fairness\",\"authors\":\"C. Palamidessi\",\"doi\":\"10.1145/3577923.3587263\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate the issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the iterative Bayesian update (IBU), an instance of the expectation-maximization method from statistics. I will show that the IBU, combined with a version of DP called d-\\\\emphprivacy (also known as metric differential privacy ), outperforms the state-of-the-art, which is based on algebraic methods combined with the randomized response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Then, I will discuss the issue of biased predictions in machine learning, and how DP can affect the level of fairness and accuracy of the trained model. 
Finally, I will show that the IBU can be applied also in this domain to ensure fairer treatment of disadvantaged groups and reconcile fairness and accuracy.\",\"PeriodicalId\":387479,\"journal\":{\"name\":\"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy\",\"volume\":\"121 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3577923.3587263\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3577923.3587263","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate these issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the iterative Bayesian update (IBU), an instance of the expectation-maximization method from statistics. I will show that the IBU, combined with a version of DP called d-privacy (also known as metric differential privacy), outperforms the state of the art, which is based on algebraic methods combined with the randomized response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Then, I will discuss the issue of biased predictions in machine learning, and how DP can affect the level of fairness and accuracy of the trained model. Finally, I will show that the IBU can also be applied in this domain, to ensure fairer treatment of disadvantaged groups and to reconcile fairness and accuracy.
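As an illustration of the baseline the abstract refers to, here is a minimal Python sketch (not from the talk; the domain size k, the privacy parameter eps, and all function names are illustrative assumptions) of k-ary randomized response, the local mechanism widely deployed in industry, together with the classical algebraic decoder that estimates the distribution of the true values by inverting the mechanism's channel matrix.

import numpy as np

rng = np.random.default_rng(0)

def krr_report(x, k, eps):
    # k-ary randomized response: report the true value x in {0, ..., k-1}
    # with probability e^eps / (e^eps + k - 1), otherwise report one of
    # the other k - 1 values uniformly at random.
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_true:
        return x
    return int(rng.choice([v for v in range(k) if v != x]))

def algebraic_estimate(reports, k, eps):
    # Channel matrix C[y, x] = P(report y | true value x) for k-RR.
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = (1.0 - p) / (k - 1)
    C = np.full((k, k), q) + (p - q) * np.eye(k)
    # Empirical distribution of the noisy reports.
    freq = np.bincount(reports, minlength=k) / len(reports)
    # Unbiased estimate of the true distribution: solve C @ pi = freq.
    return np.linalg.solve(C, freq)

The matrix-inversion estimate is unbiased but not constrained to the probability simplex: with few samples or a small eps it can contain negative entries, which is one practical weakness of the algebraic approach.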
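The d-privacy (metric differential privacy) notion mentioned above scales the indistinguishability requirement with a distance: a mechanism K is eps·d-private if P(K(x) = z) <= e^(eps·d(x, x')) · P(K(x') = z) for all x, x', z. The following is a minimal sketch, assuming integer-valued data with d(x, x') = |x - x'|, of one standard instance, the two-sided geometric (discrete Laplace) mechanism; the parameters are illustrative and the talk does not specify this particular mechanism.

import numpy as np

rng = np.random.default_rng(1)

def geometric_mechanism(x, eps):
    # Two-sided geometric (discrete Laplace) noise: P(n) is proportional
    # to exp(-eps * |n|), so P(K(x) = z) / P(K(x') = z) <= exp(eps * |x - x'|).
    # Requires eps > 0.
    alpha = np.exp(-eps)
    # The difference of two i.i.d. geometric variables (failures before
    # the first success) follows the two-sided geometric distribution.
    g1 = rng.geometric(1.0 - alpha) - 1
    g2 = rng.geometric(1.0 - alpha) - 1
    return x + g1 - g2

Under such a mechanism nearby values are nearly indistinguishable while distant ones need not be, which is how d-privacy relaxes plain DP to trade protection for utility on metric domains.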
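The iterative Bayesian update itself is also easy to sketch. Given the channel matrix C of the mechanism and the empirical distribution q of the noisy reports, IBU is the expectation-maximization estimate of the true distribution pi: starting from a full-support prior, it repeats pi(x) <- pi(x) · sum_y C[y, x] · q(y) / (C pi)(y) until convergence. The Python below is an illustrative implementation under those definitions, not the talk's code; it can post-process the same reports as the k-RR sketch above by passing in that C.

import numpy as np

def ibu(reports, C, iters=500, tol=1e-10):
    # C[y, x] = P(report y | true value x); reports are the observed y's.
    q = np.bincount(reports, minlength=C.shape[0]) / len(reports)
    k = C.shape[1]
    pi = np.full(k, 1.0 / k)          # uniform full-support prior
    for _ in range(iters):
        mix = C @ pi                  # predicted distribution of the reports
        # Multiplicative EM update; each iterate stays on the simplex.
        new_pi = pi * (C.T @ (q / np.maximum(mix, 1e-12)))
        if np.abs(new_pi - pi).sum() < tol:
            return new_pi
        pi = new_pi
    return pi

Unlike the algebraic inverse, every IBU iterate is a genuine probability distribution, which gives one intuition for the utility gains discussed in the talk; the abstract notes that the same post-processing machinery extends to trained models, to rebalance the treatment of disadvantaged groups.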