Does Differential Privacy Prevent Backdoor Attacks in Practice?

Fereshteh Razmi, Jian Lou, Li Xiong
{"title":"差分隐私在实践中能否防止后门攻击?","authors":"Fereshteh Razmi, Jian Lou, Li Xiong","doi":"10.1007/978-3-031-65172-4_20","DOIUrl":null,"url":null,"abstract":"<p><p>Differential Privacy (DP) was originally developed to protect privacy. However, it has recently been utilized to secure machine learning (ML) models from poisoning attacks, with DP-SGD receiving substantial attention. Nevertheless, a thorough investigation is required to assess the effectiveness of different DP techniques in preventing backdoor attacks in practice. In this paper, we investigate the effectiveness of DP-SGD and, for the first time, examine PATE and Label-DP in the context of backdoor attacks. We also explore the role of different components of DP algorithms in defending against backdoor attacks and will show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs. Our experiments reveal that hyper-parameters and the number of backdoors in the training dataset impact the success of DP algorithms. We also conclude that while Label-DP algorithms generally offer weaker privacy protection, accurate hyper-parameter tuning can make them more effective than DP methods in defending against backdoor attacks while maintaining model accuracy.</p>","PeriodicalId":520399,"journal":{"name":"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...","volume":"14901 ","pages":"320-340"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094713/pdf/","citationCount":"0","resultStr":"{\"title\":\"Does Differential Privacy Prevent Backdoor Attacks in Practice?\",\"authors\":\"Fereshteh Razmi, Jian Lou, Li Xiong\",\"doi\":\"10.1007/978-3-031-65172-4_20\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Differential Privacy (DP) was originally developed to protect privacy. However, it has recently been utilized to secure machine learning (ML) models from poisoning attacks, with DP-SGD receiving substantial attention. Nevertheless, a thorough investigation is required to assess the effectiveness of different DP techniques in preventing backdoor attacks in practice. In this paper, we investigate the effectiveness of DP-SGD and, for the first time, examine PATE and Label-DP in the context of backdoor attacks. We also explore the role of different components of DP algorithms in defending against backdoor attacks and will show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs. Our experiments reveal that hyper-parameters and the number of backdoors in the training dataset impact the success of DP algorithms. We also conclude that while Label-DP algorithms generally offer weaker privacy protection, accurate hyper-parameter tuning can make them more effective than DP methods in defending against backdoor attacks while maintaining model accuracy.</p>\",\"PeriodicalId\":520399,\"journal\":{\"name\":\"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. 
Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...\",\"volume\":\"14901 \",\"pages\":\"320-340\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094713/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-031-65172-4_20\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/7/13 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data and applications security and privacy XXXVIII : 38th Annual IFIP WG 11.3 Conference, DBSec 2024, San Jose, CA, USA, July 15-17, 2024, Proceedings. Annual IFIP WG 11.3 Working Conference on Data and Applications Security (38th : 202...","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-031-65172-4_20","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/7/13 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


Differential Privacy (DP) was originally developed to protect privacy. However, it has recently been used to secure machine learning (ML) models against poisoning attacks, with DP-SGD receiving substantial attention. Nevertheless, a thorough investigation is required to assess how effective different DP techniques are at preventing backdoor attacks in practice. In this paper, we investigate the effectiveness of DP-SGD and, for the first time, examine PATE and Label-DP in the context of backdoor attacks. We also explore the role of different components of DP algorithms in defending against backdoor attacks, and show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs. Our experiments reveal that hyper-parameters and the number of backdoors in the training dataset impact the success of DP algorithms. We also conclude that while Label-DP algorithms generally offer weaker privacy protection, accurate hyper-parameter tuning can make them more effective than DP methods at defending against backdoor attacks while maintaining model accuracy.
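To make the two mechanisms the abstract compares concrete, the sketch below illustrates their core operations: DP-SGD's per-example gradient clipping plus Gaussian noise, and PATE's noisy aggregation of teacher votes, whose ensemble ("bagging") structure is what the authors credit for robustness against backdoors. This is a minimal illustrative sketch, not the paper's implementation; the function names, parameter defaults, and the NumPy-based formulation are assumptions for exposition.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.05, rng=None):
    """One DP-SGD update (sketch): clip each per-example gradient to
    L2 norm `clip_norm`, average, and add Gaussian noise whose scale
    is `noise_multiplier * clip_norm` relative to the gradient sum."""
    if rng is None:
        rng = np.random.default_rng()
    batch = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise on the mean: std = noise_multiplier * clip_norm / batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=mean_grad.shape)
    return -lr * (mean_grad + noise)  # parameter update to apply

def pate_noisy_aggregate(teacher_votes, num_classes, noise_scale=1.0,
                         rng=None):
    """PATE label aggregation (sketch): histogram the teachers'
    predicted labels and return the noisy argmax (Laplace noise
    added to the vote counts)."""
    if rng is None:
        rng = np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, noise_scale, size=num_classes)
    return int(np.argmax(counts))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # DP-SGD on fabricated per-example gradients (batch of 32, dim 10).
    grads = [rng.normal(size=10) for _ in range(32)]
    print("DP-SGD update (first 3 dims):", dp_sgd_step(grads, rng=rng)[:3])
    # PATE: aggregate six teachers' votes over 3 classes.
    votes = np.array([0, 0, 1, 0, 2, 0])
    print("PATE aggregated label:", pate_noisy_aggregate(votes, 3, rng=rng))
```

The sketch also hints at why PATE can resist backdoors: a poisoned sample influences only the few teachers whose data partitions contain it, so its effect is diluted by the noisy vote over the remaining teachers, whereas DP-SGD's clipping and noise act on every gradient step of a single model.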
