Benchmarking Relaxed Differential Privacy in Private Learning: A Comparative Survey

Impact Factor: 28.0 | CAS Zone 1 (Computer Science) | JCR Q1, COMPUTER SCIENCE, THEORY & METHODS
Zhaolong Zheng, Lin Yao, Haibo Hu, Guowei Wu
{"title":"私教学习中放松差别隐私的基准:一项比较调查","authors":"Zhaolong Zheng, Lin Yao, Haibo Hu, Guowei Wu","doi":"10.1145/3729216","DOIUrl":null,"url":null,"abstract":"Differential privacy (DP), a rigorously quantifiable privacy preservation technique, has found widespread application within the domain of machine learning. As DP techniques are implemented in machine learning algorithms, a significant and intricate trade-off between privacy and utility emerges, garnering extensive attention from researchers. In the pursuit of striking a delicate equilibrium between safeguarding sensitive data and optimizing its utility, researchers have introduced various variants of Relaxed Differential Privacy (RDP) definitions. These nuanced formulations, however, exhibit substantial diversity in their underlying principles and interpretations of the core concept of DP, thereby engendering a current void in the comprehensive synthesis of these related works. The principal objective of this article is twofold. Firstly, it aims to provide a comprehensive summary of pertinent research endeavors pertaining to RDP within the realm of machine learning. Secondly, it endeavors to empirically assess the impact on both privacy and utility stemming from machine learning algorithms founded upon these RDP definitions. Additionally, this article undertakes a systematic analysis of the foundational principles underpinning distinct variants of relaxed definitions, culminating in the development of a taxonomy that categorizes these RDP definitions.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"8 1","pages":""},"PeriodicalIF":28.0000,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Benchmarking Relaxed Differential Privacy in Private Learning: A Comparative Survey\",\"authors\":\"Zhaolong Zheng, Lin Yao, Haibo Hu, Guowei Wu\",\"doi\":\"10.1145/3729216\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Differential privacy (DP), a rigorously quantifiable privacy preservation technique, has found widespread application within the domain of machine learning. As DP techniques are implemented in machine learning algorithms, a significant and intricate trade-off between privacy and utility emerges, garnering extensive attention from researchers. In the pursuit of striking a delicate equilibrium between safeguarding sensitive data and optimizing its utility, researchers have introduced various variants of Relaxed Differential Privacy (RDP) definitions. These nuanced formulations, however, exhibit substantial diversity in their underlying principles and interpretations of the core concept of DP, thereby engendering a current void in the comprehensive synthesis of these related works. The principal objective of this article is twofold. Firstly, it aims to provide a comprehensive summary of pertinent research endeavors pertaining to RDP within the realm of machine learning. Secondly, it endeavors to empirically assess the impact on both privacy and utility stemming from machine learning algorithms founded upon these RDP definitions. 
Additionally, this article undertakes a systematic analysis of the foundational principles underpinning distinct variants of relaxed definitions, culminating in the development of a taxonomy that categorizes these RDP definitions.\",\"PeriodicalId\":50926,\"journal\":{\"name\":\"ACM Computing Surveys\",\"volume\":\"8 1\",\"pages\":\"\"},\"PeriodicalIF\":28.0000,\"publicationDate\":\"2025-06-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Computing Surveys\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3729216\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Computing Surveys","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3729216","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Differential privacy (DP), a rigorously quantifiable privacy preservation technique, has found widespread application within the domain of machine learning. As DP techniques are implemented in machine learning algorithms, a significant and intricate trade-off between privacy and utility emerges, garnering extensive attention from researchers. In the pursuit of striking a delicate equilibrium between safeguarding sensitive data and optimizing its utility, researchers have introduced various variants of Relaxed Differential Privacy (RDP) definitions. These nuanced formulations, however, exhibit substantial diversity in their underlying principles and interpretations of the core concept of DP, thereby engendering a current void in the comprehensive synthesis of these related works. The principal objective of this article is twofold. Firstly, it aims to provide a comprehensive summary of pertinent research endeavors pertaining to RDP within the realm of machine learning. Secondly, it endeavors to empirically assess the impact on both privacy and utility stemming from machine learning algorithms founded upon these RDP definitions. Additionally, this article undertakes a systematic analysis of the foundational principles underpinning distinct variants of relaxed definitions, culminating in the development of a taxonomy that categorizes these RDP definitions.
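For context, the relaxed definitions surveyed here build on the standard notion of differential privacy. As a brief recap of well-known definitions (standard notation, not drawn from the article itself): a randomized mechanism \mathcal{M} satisfies pure \varepsilon-DP if, for all neighboring datasets D, D' and every measurable output set S,

\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon}\, \Pr[\mathcal{M}(D') \in S],

whereas the most common relaxation, (\varepsilon, \delta)-DP, allows an additive slack \delta:

\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon}\, \Pr[\mathcal{M}(D') \in S] + \delta.

Other relaxed variants discussed in the literature, such as Rényi differential privacy and zero-concentrated differential privacy, instead bound divergences between the output distributions of \mathcal{M}(D) and \mathcal{M}(D').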
Source Journal
ACM Computing Surveys (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 0.60%
Articles per year: 372
Review time: 12 months
Journal description: ACM Computing Surveys is an academic journal that focuses on publishing surveys and tutorials on various areas of computing research and practice. The journal aims to provide comprehensive and easily understandable articles that guide readers through the literature and help them understand topics outside their specialties. In terms of impact, CSUR has a high reputation with a 2022 Impact Factor of 16.6. It is ranked 3rd out of 111 journals in the field of Computer Science Theory & Methods. ACM Computing Surveys is indexed and abstracted in various services, including AI2 Semantic Scholar, Baidu, Clarivate/ISI: JCR, CNKI, DeepDyve, DTU, EBSCO: EDS/HOST, and IET Inspec, among others.