{"title":"Layer-Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy","authors":"Zhang Xiangfei, Zhang Qingchen, Jiang Liming","doi":"10.1049/cit2.70008","DOIUrl":null,"url":null,"abstract":"<p>Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism was proposed in this paper, which dynamically adjusts the privacy budget at the layer-level as training progresses to resist member inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.</p>","PeriodicalId":46211,"journal":{"name":"CAAI Transactions on Intelligence Technology","volume":"10 3","pages":"929-944"},"PeriodicalIF":7.3000,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.70008","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CAAI Transactions on Intelligence Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cit2.70008","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during training. However, classic differentially private learning injects the same level of noise into the gradients at every training iteration, which degrades the trade-off between model utility and privacy guarantees. To address this issue, this paper proposes an adaptive differential privacy mechanism that dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. As training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude is determined automatically from the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tune parameters and ensuring that the privacy preservation strategy keeps pace with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
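As a rough illustration of the layer-level adaptive allocation described in the abstract, the following Python sketch shows one way such a schedule could look. The function names (`allocate_layer_budgets`, `perturb_gradients`), the linear shift schedule, the `shift` strength, and the Gaussian-mechanism noise calibration are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def allocate_layer_budgets(total_epsilon, num_layers, iteration, total_iterations, shift=0.5):
    """Split a per-iteration privacy budget across layers.

    Early in training every layer gets an equal share; as training
    progresses, budget is shifted away from layers near the output
    (so they receive more noise) toward layers near the input.
    The linear schedule and `shift` strength are assumptions.
    """
    base = total_epsilon / num_layers
    progress = iteration / max(total_iterations, 1)       # 0 -> 1 over training
    depth = np.linspace(-1.0, 1.0, num_layers)            # -1 = input layer, +1 = output layer
    budgets = base * (1.0 - shift * progress * depth)     # shrink output-side shares over time
    return budgets * (total_epsilon / budgets.sum())      # keep the per-iteration total fixed

def perturb_gradients(grads, budgets, clip_norm=1.0, delta=1e-5, rng=None):
    """Clip each layer's gradient and add Gaussian noise scaled to its budget.

    Uses the standard Gaussian-mechanism calibration
    sigma = sqrt(2 ln(1.25/delta)) * clip_norm / epsilon for each layer.
    """
    rng = rng or np.random.default_rng()
    noisy = []
    for g, eps in zip(grads, budgets):
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # per-layer clipping
        sigma = np.sqrt(2 * np.log(1.25 / delta)) * clip_norm / eps
        noisy.append(g + rng.normal(0.0, sigma, size=g.shape))
    return noisy

# Budgets drift from equal shares toward the input-side layers as training advances.
for t in (0, 500, 1000):
    print(t, np.round(allocate_layer_budgets(1.0, 4, t, 1000), 3))

# Perturbing a toy 4-layer gradient list late in training.
grads = [np.random.randn(8, 4), np.random.randn(4), np.random.randn(4, 2), np.random.randn(2)]
noisy = perturb_gradients(grads, allocate_layer_budgets(1.0, 4, 800, 1000))
```

At iteration 0 every layer receives an equal share of the budget; by the final iteration the output-side layers hold the smallest shares and therefore receive the most noise, matching the qualitative behaviour described above.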
Journal Introduction:
CAAI Transactions on Intelligence Technology is a leading venue for original research on the theoretical and experimental aspects of artificial intelligence technology. It is a fully open access journal co-published by the Institution of Engineering and Technology (IET) and the Chinese Association for Artificial Intelligence (CAAI), providing research that is openly accessible to read and share worldwide.