{"title":"差分隐私在实践中能否防止后门攻击?","authors":"Fereshteh Razmi, Jian Lou, Li Xiong","doi":"10.1007/978-3-031-65172-4_20","DOIUrl":null,"url":null,"abstract":"<p><p>Differential Privacy (DP) was originally developed to protect privacy. However, it has recently been utilized to secure machine learning (ML) models from poisoning attacks, with DP-SGD receiving substantial attention. Nevertheless, a thorough investigation is required to assess the effectiveness of different DP techniques in preventing backdoor attacks in practice. In this paper, we investigate the effectiveness of DP-SGD and, for the first time, examine PATE and Label-DP in the context of backdoor attacks. We also explore the role of different components of DP algorithms in defending against backdoor attacks and will show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs. Our experiments reveal that hyper-parameters and the number of backdoors in the training dataset impact the success of DP algorithms. We also conclude that while Label-DP algorithms generally offer weaker privacy protection, accurate hyper-parameter tuning can make them more effective than DP methods in defending against backdoor attacks while maintaining model accuracy.</p>","PeriodicalId":520399,"journal":{"name":"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...","volume":"14901 ","pages":"320-340"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094713/pdf/","citationCount":"0","resultStr":"{\"title\":\"Does Differential Privacy Prevent Backdoor Attacks in Practice?\",\"authors\":\"Fereshteh Razmi, Jian Lou, Li Xiong\",\"doi\":\"10.1007/978-3-031-65172-4_20\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Differential Privacy (DP) was originally developed to protect privacy. However, it has recently been utilized to secure machine learning (ML) models from poisoning attacks, with DP-SGD receiving substantial attention. Nevertheless, a thorough investigation is required to assess the effectiveness of different DP techniques in preventing backdoor attacks in practice. In this paper, we investigate the effectiveness of DP-SGD and, for the first time, examine PATE and Label-DP in the context of backdoor attacks. We also explore the role of different components of DP algorithms in defending against backdoor attacks and will show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs. Our experiments reveal that hyper-parameters and the number of backdoors in the training dataset impact the success of DP algorithms. We also conclude that while Label-DP algorithms generally offer weaker privacy protection, accurate hyper-parameter tuning can make them more effective than DP methods in defending against backdoor attacks while maintaining model accuracy.</p>\",\"PeriodicalId\":520399,\"journal\":{\"name\":\"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. 
Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...\",\"volume\":\"14901 \",\"pages\":\"320-340\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094713/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-031-65172-4_20\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/7/13 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-031-65172-4_20","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/7/13 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Does Differential Privacy Prevent Backdoor Attacks in Practice?
Differential Privacy (DP) was originally developed to protect privacy. However, it has recently been used to secure machine learning (ML) models against poisoning attacks, with DP-SGD receiving substantial attention. Nevertheless, a thorough investigation is needed to assess how effective different DP techniques are at preventing backdoor attacks in practice. In this paper, we investigate the effectiveness of DP-SGD and, for the first time, examine PATE and Label-DP in the context of backdoor attacks. We also explore the role of different components of DP algorithms in defending against backdoor attacks, and we show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs. Our experiments reveal that hyper-parameters and the number of backdoored samples in the training dataset affect the success of DP algorithms. We also conclude that while Label-DP algorithms generally offer weaker privacy protection, careful hyper-parameter tuning can make them more effective than DP methods in defending against backdoor attacks while maintaining model accuracy.
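For readers unfamiliar with the threat model, the sketch below shows a typical BadNets-style backdoor injection: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. The function name, trigger shape, and poisoning rate here are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def poison_dataset(x, y, target_label, poison_frac=0.01, patch_value=1.0, rng=None):
    """Stamp a trigger patch on a random subset of images (shape (N, H, W))
    and relabel them to the attacker's target class (BadNets-style backdoor)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, y = x.copy(), y.copy()
    n_poison = int(poison_frac * len(x))
    idx = rng.choice(len(x), size=n_poison, replace=False)
    # Trigger: a 3x3 patch in the bottom-right corner of each chosen image.
    x[idx, -3:, -3:] = patch_value
    y[idx] = target_label
    return x, y, idx
```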
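DP-SGD's defensive effect against such poisoning comes from per-example gradient clipping and calibrated noise, which bound and then mask the influence any single backdoored sample can have on an update. Below is a minimal NumPy sketch of one DP-SGD step for binary logistic regression, following the standard Abadi et al. (2016) formulation; all names are illustrative, and this is a sketch rather than the authors' implementation.

```python
import numpy as np

def dp_sgd_step(w, xb, yb, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD update: per-example gradients are clipped to clip_norm,
    summed, perturbed with Gaussian noise of std noise_mult * clip_norm,
    and averaged over the batch."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Per-example gradients of the logistic loss: (p - y) * x.
    p = 1.0 / (1.0 + np.exp(-xb @ w))            # predicted probabilities
    grads = (p - yb)[:, None] * xb               # shape (batch, dim)
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum, add calibrated Gaussian noise, then average over the batch.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(xb)
```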
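PATE's robustness, which the abstract attributes to the bagging structure of its teacher ensemble, comes from noisy plurality voting over teachers trained on disjoint data partitions: a backdoor confined to a few partitions poisons only those teachers, whose votes are outweighed by the clean majority. A minimal sketch of the Laplace-noise label aggregation step follows (a hypothetical helper under standard PATE assumptions, not the authors' code).

```python
import numpy as np

def pate_aggregate(teacher_preds, n_classes, laplace_scale=1.0, rng=None):
    """PATE noisy-max aggregation: tally teacher votes, add Laplace noise
    to the vote histogram, and release the noisy plurality label."""
    if rng is None:
        rng = np.random.default_rng(0)
    votes = np.bincount(teacher_preds, minlength=n_classes).astype(float)
    votes += rng.laplace(0.0, laplace_scale, size=n_classes)
    return int(np.argmax(votes))
```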