{"title":"Defending against attacks in deep learning with differential privacy: a survey","authors":"Zhang Xiangfei, Zhang Qingchen","doi":"10.1007/s10462-025-11350-3","DOIUrl":null,"url":null,"abstract":"<div><p>Recently, we have witnessed the revolutionary development of deep learning. As the application domain of deep learning has expanded, its privacy risks have attracted attention since deep leaning methods often use private data for training. Some methods for attacking deep learning, such as membership inference attacks, increase the privacy risks of deep learning models. One risk-reducing defensive strategy with great potential is to apply some degree of random perturbation during the training (or other) phase. Therefore, differential privacy, as a privacy protection framework originally designed for publishing data, is widely used to protect the privacy of deep learning models due to its solid mathematical foundation. In this paper, we first introduce several attack methods that threaten deep learning. Then, we systematically review the cross-applications of differential privacy and deep learning to protect deep learning models. We encourage researchers to visually demonstrate the defense effects of their approaches in the literature rather than solely providing rigorous mathematical proofs. In addition to privacy, we also discuss and review the impact of differential privacy on the robustness, overfitting, and fairness of deep neural networks. Finally, we analyze some potential future research directions, highlighting the significant potential for differential privacy to make positive contributions to future deep learning systems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 11","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11350-3.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11350-3","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Recently, we have witnessed the revolutionary development of deep learning. As the application domain of deep learning has expanded, its privacy risks have attracted attention, since deep learning methods often use private data for training. Attacks on deep learning, such as membership inference attacks, increase the privacy risks of deep learning models. One promising risk-reducing defensive strategy is to apply some degree of random perturbation during the training (or another) phase. Differential privacy, a privacy protection framework originally designed for data publishing, is therefore widely used to protect the privacy of deep learning models thanks to its solid mathematical foundation. In this paper, we first introduce several attack methods that threaten deep learning. Then, we systematically review the cross-applications of differential privacy and deep learning for protecting deep learning models. We encourage researchers to visually demonstrate the defense effects of their approaches rather than solely providing rigorous mathematical proofs. Beyond privacy, we also discuss and review the impact of differential privacy on the robustness, overfitting, and fairness of deep neural networks. Finally, we analyze potential future research directions, highlighting the significant potential for differential privacy to make positive contributions to future deep learning systems.
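To make the training-time perturbation concrete: the standard recipe the abstract alludes to is DP-SGD, which clips each per-example gradient to a fixed L2 norm and adds calibrated Gaussian noise before the parameter update. The sketch below is a minimal illustration on toy logistic regression, not code from the survey itself; the function name dp_sgd_step, the clipping norm C, the noise multiplier sigma, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch of DP-SGD-style gradient perturbation: per-example
# gradient clipping plus Gaussian noise, on toy logistic regression.
# All hyperparameters (lr, C, sigma) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0):
    """One DP-SGD step: clip each per-example gradient to L2 norm C,
    sum the clipped gradients, add Gaussian noise scaled to the
    clipping bound, then take an averaged gradient step."""
    n = len(y)
    grad_sum = np.zeros_like(w)
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))        # sigmoid prediction
        g = (p - yi) * xi                        # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / C)  # clip to L2 norm C
        grad_sum += g
    noise = rng.normal(0.0, sigma * C, size=w.shape)  # Gaussian mechanism
    return w - lr * (grad_sum + noise) / n

# Toy data: a 2-class problem in 5 dimensions.
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
print("trained weights:", w)
```

Note that the noise standard deviation scales with the clipping bound (sigma * C), so the added perturbation is calibrated to the worst-case influence of any single example, which is what allows a formal differential privacy guarantee to be attached to the training procedure.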
About the journal
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.