{"title":"Gradient-Level Differential Privacy Against Attribute Inference Attack for Speech Emotion Recognition","authors":"Haijiao Chen;Huan Zhao;Zixing Zhang","doi":"10.1109/LSP.2024.3490379","DOIUrl":null,"url":null,"abstract":"The Federated Learning (FL) paradigm for distributed privacy preservation is valued for its ability to collaboratively train Speech Emotion Recognition (SER) models while keeping data localized. However, recent studies reveal privacy leakage in the model sharing process. Existing differential privacy schemes face increasing inference attack risks as clients expose more model updates. To address these challenges, we propose a \n<underline>G</u>\nradient-level \n<underline>H</u>\nierarchical \n<underline>D</u>\nifferential \n<underline>P</u>\nrivacy (GHDP) strategy to mitigate attribute inference attacks. GHDP employs normalization to distinguish gradient importance, clipping significant gradients and filtering out sensitive information that may lead to privacy leaks. Additionally, increased random perturbations are applied to early model layers during backpropagation, achieving hierarchical differential privacy through layered noise addition. This theoretically grounded approach offers enhanced protection for critical information. Our experiments show that GHDP maintains stable SER performance while providing robust privacy protection, unaffected by the number of model updates.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"31 ","pages":"3124-3128"},"PeriodicalIF":3.2000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10740800/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
The Federated Learning (FL) paradigm for distributed privacy preservation is valued for its ability to collaboratively train Speech Emotion Recognition (SER) models while keeping data localized. However, recent studies reveal privacy leakage in the model-sharing process, and existing differential privacy schemes face growing inference-attack risk as clients expose more model updates. To address these challenges, we propose a Gradient-level Hierarchical Differential Privacy (GHDP) strategy to mitigate attribute inference attacks. GHDP employs normalization to distinguish gradient importance, clipping significant gradients and filtering out sensitive information that may lead to privacy leaks. Additionally, larger random perturbations are applied to early model layers during backpropagation, achieving hierarchical differential privacy through layered noise addition. This theoretically grounded approach offers enhanced protection for critical information. Our experiments show that GHDP maintains stable SER performance while providing robust privacy protection, unaffected by the number of model updates.
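To make the mechanism concrete, below is a minimal sketch in PyTorch of the two ideas the abstract names: bounding each gradient's norm after normalization, and adding layer-wise Gaussian noise that is largest in the earliest layers. The function name ghdp_step, the geometric depth_decay noise schedule, and the treatment of each parameter tensor as one "layer" are illustrative assumptions, not the authors' implementation.

```python
import torch

def ghdp_step(model, clip_norm=1.0, base_sigma=1.0, depth_decay=0.7):
    """Illustrative sketch (not the paper's exact method): clip each
    gradient tensor, then add Gaussian noise whose scale shrinks with
    depth, so early layers are perturbed more than later ones."""
    params = [p for p in model.parameters() if p.grad is not None]
    for depth, p in enumerate(params):
        grad = p.grad
        # Bound the per-tensor gradient L2 norm to clip_norm, so
        # significant gradients are clipped rather than removed.
        scale = torch.clamp(clip_norm / (grad.norm(2) + 1e-12), max=1.0)
        grad = grad * scale
        # Hierarchical noise: depth 0 (earliest layer) gets base_sigma;
        # deeper layers get geometrically less noise.
        sigma = base_sigma * (depth_decay ** depth)
        p.grad = grad + torch.randn_like(grad) * sigma * clip_norm
```

In a training loop this would sit between loss.backward() and optimizer.step(); the actual GHDP noise calibration and privacy accounting are specified in the paper, not here.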
About the journal:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.