GReDP: A More Robust Approach for Differential Privacy Training with Gradient-Preserving Noise Reduction

Haodi Wang, Tangyu Jiang, Yu Guo, Xiaohua Jia, Chengjun Cai
arXiv:2409.11663 · arXiv - CS - Cryptography and Security · Published 2024-09-18
Citations: 0

Abstract

Deep learning models have been extensively adopted in various domains due to their ability to represent hierarchical features, an ability that relies heavily on the training set and procedures. Thus, protecting the training process and deep learning algorithms is paramount in privacy preservation. Although Differential Privacy (DP) as a powerful cryptographic primitive has achieved satisfactory results in deep learning training, the existing schemes still fall short in preserving model utility, i.e., they either require a high noise scale or inevitably harm the original gradients. To address the above issues, in this paper, we present a more robust approach for DP training called GReDP. Specifically, we compute the model gradients in the frequency domain and adopt a new approach to reduce the noise level. Unlike previous work, our GReDP requires only half the noise scale of DPSGD [1] while keeping all the gradient information intact. We present a detailed analysis of our method both theoretically and empirically. The experimental results show that our GReDP works consistently better than the baselines on all models and training settings.
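The abstract contrasts the standard DPSGD pipeline (per-sample gradient clipping followed by Gaussian noise) with GReDP's idea of perturbing gradients in the frequency domain. The sketch below is purely illustrative and does not reproduce the paper's actual algorithm: the clipping threshold, noise calibration, and the `freq_domain_noisy_grad` variant are hypothetical simplifications used only to make the two-step structure (clip, then noise) concrete.

```python
import numpy as np

def clip_and_average(grads, clip_norm):
    # Per-sample clipping as in DPSGD: rescale each gradient so its
    # L2 norm is at most clip_norm, then average across the batch.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in grads]
    return np.mean(clipped, axis=0)

def dpsgd_noisy_grad(grads, clip_norm, sigma, rng):
    # DPSGD-style update: add Gaussian noise, calibrated to the
    # clipping threshold, directly to the averaged gradient.
    avg = clip_and_average(grads, clip_norm)
    noise = rng.normal(0.0, sigma * clip_norm / len(grads), size=avg.shape)
    return avg + noise

def freq_domain_noisy_grad(grads, clip_norm, sigma, rng):
    # Hypothetical frequency-domain variant (illustrative only):
    # noise the FFT of the averaged clipped gradient, then invert
    # the transform to return to the original domain.
    avg = clip_and_average(grads, clip_norm)
    spec = np.fft.fft(avg)
    noise = rng.normal(0.0, sigma * clip_norm / len(grads), size=spec.shape)
    return np.real(np.fft.ifft(spec + noise))
```

With `sigma = 0` both routines reduce to plain clipped averaging, which is a convenient sanity check when experimenting with noise calibration.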