Gradient-Level Differential Privacy Against Attribute Inference Attack for Speech Emotion Recognition

IF 3.2 · CAS Region 2 (Engineering & Technology) · JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Haijiao Chen; Huan Zhao; Zixing Zhang
IEEE Signal Processing Letters, vol. 31, pp. 3124-3128, published 2024-11-01
DOI: 10.1109/LSP.2024.3490379
URL: https://ieeexplore.ieee.org/document/10740800/
Citations: 0

Abstract

The Federated Learning (FL) paradigm for distributed privacy preservation is valued for its ability to collaboratively train Speech Emotion Recognition (SER) models while keeping data localized. However, recent studies reveal privacy leakage in the model sharing process. Existing differential privacy schemes face increasing inference attack risks as clients expose more model updates. To address these challenges, we propose a Gradient-level Hierarchical Differential Privacy (GHDP) strategy to mitigate attribute inference attacks. GHDP employs normalization to distinguish gradient importance, clipping significant gradients and filtering out sensitive information that may lead to privacy leaks. Additionally, increased random perturbations are applied to early model layers during backpropagation, achieving hierarchical differential privacy through layered noise addition. This theoretically grounded approach offers enhanced protection for critical information. Our experiments show that GHDP maintains stable SER performance while providing robust privacy protection, unaffected by the number of model updates.
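The abstract describes the mechanism only at a high level: per-layer gradient clipping combined with noise whose magnitude is larger for earlier layers. Below is a minimal PyTorch sketch of what such gradient-level, layer-dependent perturbation could look like. The function name `ghdp_perturb_gradients`, the parameters `clip_norm`, `base_sigma`, and `decay`, and the exact noise schedule are illustrative assumptions, not the authors' published formulation.

```python
# Illustrative sketch of per-layer gradient clipping plus layer-wise ("hierarchical")
# Gaussian noise, in the spirit of the GHDP description above. Parameter names and
# the noise schedule are assumptions for illustration, not the paper's exact algorithm.
import torch


def ghdp_perturb_gradients(model: torch.nn.Module,
                           clip_norm: float = 1.0,
                           base_sigma: float = 1.0,
                           decay: float = 0.5) -> None:
    """Clip each layer's gradient and add Gaussian noise whose scale is
    largest for the earliest layers and decays toward the output layers."""
    params = [p for p in model.parameters() if p.grad is not None]
    num_layers = len(params)
    for idx, param in enumerate(params):
        grad = param.grad
        # Bound the per-layer gradient norm so its sensitivity is at most clip_norm.
        scale = min(1.0, clip_norm / (grad.norm().item() + 1e-12))
        grad = grad * scale
        # Hierarchical noise: earlier layers (small idx) get a larger multiplier,
        # decaying geometrically toward the later layers.
        sigma = base_sigma * (decay ** (idx / max(num_layers - 1, 1)))
        noise = torch.randn_like(grad) * (sigma * clip_norm)
        param.grad = grad + noise
```

In a federated client's local training loop, a routine like this would run after `loss.backward()` and before `optimizer.step()`, so that only clipped, noised gradients ever influence the model update later shared with the server.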
Source journal
IEEE Signal Processing Letters (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 7.40
Self-citation rate: 12.80%
Articles published: 339
Review time: 2.8 months
Journal description: The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance in signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also in several workshops organized by the Signal Processing Society.