Layer-Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy

IF 7.3 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhang Xiangfei, Zhang Qingchen, Jiang Liming
{"title":"Layer-Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy","authors":"Zhang Xiangfei,&nbsp;Zhang Qingchen,&nbsp;Jiang Liming","doi":"10.1049/cit2.70008","DOIUrl":null,"url":null,"abstract":"<p>Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism was proposed in this paper, which dynamically adjusts the privacy budget at the layer-level as training progresses to resist member inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.</p>","PeriodicalId":46211,"journal":{"name":"CAAI Transactions on Intelligence Technology","volume":"10 3","pages":"929-944"},"PeriodicalIF":7.3000,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.70008","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CAAI Transactions on Intelligence Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cit2.70008","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism is proposed in this paper, which dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude is determined automatically from the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that the privacy preservation strategy aligns with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
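The allocation rule described in the abstract can be illustrated with a short NumPy sketch. The snippet below is a minimal illustration, assuming a linear tilt schedule over the layer index and a hypothetical tilt-strength parameter `alpha`; the function names, the schedule, and the use of the standard analytic Gaussian-mechanism noise scale are assumptions for illustration, not the paper's exact formulas or privacy accountant.

```python
import numpy as np

def layer_budgets(eps_total, num_layers, t, T, alpha=0.5):
    """Split a per-step privacy budget eps_total across layers.

    At t = 0 every layer gets eps_total / num_layers. As training
    progresses (t -> T), layers nearer the output receive a smaller
    share (hence more noise) and layers nearer the input a larger one.
    `alpha` (hypothetical knob, not from the paper) controls the tilt;
    keep alpha <= 1 so all budgets stay positive.
    """
    base = eps_total / num_layers
    # Linear tilt from +1 (input side) to -1 (output side), scaled by progress t/T.
    tilt = np.linspace(1.0, -1.0, num_layers) * alpha * (t / T)
    eps = base * (1.0 + tilt)
    # Renormalise so the per-step budgets still sum to eps_total.
    return eps * (eps_total / eps.sum())

def perturb_gradients(grads, eps, delta, clip_norm=1.0):
    """Clip each layer's gradient and add Gaussian noise scaled to its budget."""
    noisy = []
    for g, e in zip(grads, eps):
        # Per-layer norm clipping bounds the sensitivity to clip_norm.
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        # Classic Gaussian-mechanism scale for an (e, delta)-DP release.
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / e
        noisy.append(g + np.random.normal(0.0, sigma, size=g.shape))
    return noisy

# Example: 4-layer model, per-step budget of 1.0, halfway through 100 iterations.
eps = layer_budgets(eps_total=1.0, num_layers=4, t=50, T=100)
grads = [np.random.randn(8, 8) for _ in range(4)]
noisy_grads = perturb_gradients(grads, eps, delta=1e-5)
```

In this sketch the only user-facing choices are the total budget and the tilt strength; the per-layer split follows automatically from the iteration count, mirroring the "automatically determined" adjustment the abstract describes.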


Source journal
CAAI Transactions on Intelligence Technology (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 11.00
Self-citation rate: 3.90%
Articles published: 134
Review time: 35 weeks
Journal description: CAAI Transactions on Intelligence Technology is a leading venue for original research on the theoretical and experimental aspects of artificial intelligence technology. It is a fully open access journal co-published by the Institution of Engineering and Technology (IET) and the Chinese Association for Artificial Intelligence (CAAI), providing research that is openly accessible to read and share worldwide.