Defending against attacks in deep learning with differential privacy: a survey

IF 13.9 · CAS Region 2 (Computer Science) · JCR Q1: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhang Xiangfei, Zhang Qingchen
{"title":"利用差分隐私防御深度学习中的攻击:一项调查","authors":"Zhang Xiangfei,&nbsp;Zhang Qingchen","doi":"10.1007/s10462-025-11350-3","DOIUrl":null,"url":null,"abstract":"<div><p>Recently, we have witnessed the revolutionary development of deep learning. As the application domain of deep learning has expanded, its privacy risks have attracted attention since deep leaning methods often use private data for training. Some methods for attacking deep learning, such as membership inference attacks, increase the privacy risks of deep learning models. One risk-reducing defensive strategy with great potential is to apply some degree of random perturbation during the training (or other) phase. Therefore, differential privacy, as a privacy protection framework originally designed for publishing data, is widely used to protect the privacy of deep learning models due to its solid mathematical foundation. In this paper, we first introduce several attack methods that threaten deep learning. Then, we systematically review the cross-applications of differential privacy and deep learning to protect deep learning models. We encourage researchers to visually demonstrate the defense effects of their approaches in the literature rather than solely providing rigorous mathematical proofs. In addition to privacy, we also discuss and review the impact of differential privacy on the robustness, overfitting, and fairness of deep neural networks. Finally, we analyze some potential future research directions, highlighting the significant potential for differential privacy to make positive contributions to future deep learning systems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 11","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11350-3.pdf","citationCount":"0","resultStr":"{\"title\":\"Defending against attacks in deep learning with differential privacy: a survey\",\"authors\":\"Zhang Xiangfei,&nbsp;Zhang Qingchen\",\"doi\":\"10.1007/s10462-025-11350-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Recently, we have witnessed the revolutionary development of deep learning. As the application domain of deep learning has expanded, its privacy risks have attracted attention since deep leaning methods often use private data for training. Some methods for attacking deep learning, such as membership inference attacks, increase the privacy risks of deep learning models. One risk-reducing defensive strategy with great potential is to apply some degree of random perturbation during the training (or other) phase. Therefore, differential privacy, as a privacy protection framework originally designed for publishing data, is widely used to protect the privacy of deep learning models due to its solid mathematical foundation. In this paper, we first introduce several attack methods that threaten deep learning. Then, we systematically review the cross-applications of differential privacy and deep learning to protect deep learning models. We encourage researchers to visually demonstrate the defense effects of their approaches in the literature rather than solely providing rigorous mathematical proofs. In addition to privacy, we also discuss and review the impact of differential privacy on the robustness, overfitting, and fairness of deep neural networks. 
Finally, we analyze some potential future research directions, highlighting the significant potential for differential privacy to make positive contributions to future deep learning systems.</p></div>\",\"PeriodicalId\":8449,\"journal\":{\"name\":\"Artificial Intelligence Review\",\"volume\":\"58 11\",\"pages\":\"\"},\"PeriodicalIF\":13.9000,\"publicationDate\":\"2025-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10462-025-11350-3.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10462-025-11350-3\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11350-3","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recently, we have witnessed the revolutionary development of deep learning. As the application domain of deep learning has expanded, its privacy risks have attracted attention, since deep learning methods often use private data for training. Some methods for attacking deep learning, such as membership inference attacks, increase the privacy risks of deep learning models. One risk-reducing defensive strategy with great potential is to apply some degree of random perturbation during the training (or other) phase. Therefore, differential privacy, as a privacy protection framework originally designed for publishing data, is widely used to protect the privacy of deep learning models due to its solid mathematical foundation. In this paper, we first introduce several attack methods that threaten deep learning. Then, we systematically review the cross-applications of differential privacy and deep learning to protect deep learning models. We encourage researchers to visually demonstrate the defense effects of their approaches in the literature rather than solely providing rigorous mathematical proofs. In addition to privacy, we also discuss and review the impact of differential privacy on the robustness, overfitting, and fairness of deep neural networks. Finally, we analyze some potential future research directions, highlighting the significant potential for differential privacy to make positive contributions to future deep learning systems.
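To make the two core ideas in the abstract concrete (these are illustrative sketches, not the authors' own formulations): the "solid mathematical foundation" of differential privacy refers to the standard (ε, δ)-guarantee, under which a randomized mechanism M satisfies Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ for every pair of adjacent datasets D, D′ differing in one record and every set S of outputs. A common way to "apply some degree of random perturbation during the training phase" is DP-SGD-style training, i.e. per-example gradient clipping followed by Gaussian noise, in the spirit of Abadi et al. (2016). The sketch below is a minimal NumPy illustration on a hypothetical logistic-regression model; the data, hyperparameters, and step count are arbitrary, and the final privacy accounting is only indicated in a comment.

```python
# Minimal, illustrative DP-SGD-style loop: per-example gradient clipping plus
# Gaussian noise. Hypothetical example only; not the surveyed authors' method.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data for a logistic-regression model.
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) + 0.1 * rng.normal(size=256) > 0).astype(float)
w = np.zeros(10)

clip_norm = 1.0         # C: per-example L2 clipping bound
noise_multiplier = 1.1  # sigma: noise scale relative to C
lot_size = 64           # lot (batch) size
lr = 0.1                # learning rate

def per_example_grads(w, Xb, yb):
    """Per-example gradients of the logistic loss with respect to w."""
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))   # predicted probabilities
    return (p - yb)[:, None] * Xb          # shape: (batch, dim)

for step in range(100):
    idx = rng.choice(len(X), size=lot_size, replace=False)
    g = per_example_grads(w, X[idx], y[idx])

    # 1) Clip each example's gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)

    # 2) Sum, add Gaussian noise calibrated to the clipping bound, average, step.
    noisy_sum = g.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    w -= lr * noisy_sum / lot_size

# The overall (epsilon, delta) cost of the run would be obtained by composing
# the per-step Gaussian mechanism, e.g. with a moments accountant.
```

The clipping bound limits any single record's influence on each update, and the Gaussian noise scaled to that bound is what turns each step into a differentially private mechanism; the trade-off between noise scale and model accuracy is exactly the utility-privacy tension the survey discusses.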

Source journal
Artificial Intelligence Review (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles published: 194
Review time: 5.3 months
Journal description: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.