LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks

Mengyao Ma, Yanjun Zhang, Pathum Chamikara Mahawaga Arachchige, L. Zhang, Mohan Baruwal Chhetri, Guangdong Bai
{"title":"LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks","authors":"Mengyao Ma, Yanjun Zhang, Pathum Chamikara Mahawaga Arachchige, L. Zhang, Mohan Baruwal Chhetri, Guangdong Bai","doi":"10.1145/3579856.3590334","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In MIA, adversaries maliciously manipulate the local updates on selected samples and share the gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on samples locally, an adversary can distinguish whether the attacked sample is a training sample based on observation of the change of the sample’s prediction. This type of attack exacerbates traditional passive MIA, yet the defense mechanisms remain largely unexplored. In this work, we first investigate the effectiveness of the existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient in mitigating poisoning MIA, as it targets specific victim samples and has minimal impact on model performance, unlike general poisoning. Thus, we propose a new client-side defense mechanism, called LoDen, which leverages the clients’ unique ability to detect any suspicious privacy attacks. We theoretically quantify the membership information leaked to the poisoning MIA and provide a bound for this leakage in LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579856.3590334","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In this attack, adversaries maliciously manipulate the local updates on selected samples and share the resulting gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on their samples locally, an adversary can determine whether an attacked sample is a training sample by observing the change in that sample’s prediction. This type of attack exacerbates traditional passive MIA, yet the corresponding defense mechanisms remain largely unexplored. In this work, we first investigate the effectiveness of existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient in mitigating poisoning MIA, because the attack targets specific victim samples and, unlike general poisoning, has minimal impact on model performance. We therefore propose a new client-side defense mechanism, called LoDen, which leverages the clients’ unique ability to detect suspicious privacy attacks on their own data. We theoretically quantify the membership information leaked to the poisoning MIA and provide a bound for this leakage under LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves a 0% missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to 0% in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.
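The key signal the attack exploits is also the signal a client can watch for: the adversary distinguishes members from non-members by observing how the attacked sample’s prediction changes after the poisoned update is aggregated, and a client that monitors the received global model’s behavior on its own training samples can spot the same anomaly. The sketch below is a minimal illustration of that client-side monitoring intuition, not the released LoDen implementation; the names `flag_probed_samples`, `confidence_history`, and `DROP_THRESHOLD` are hypothetical.

```python
import numpy as np

# Assumed detection threshold for an abnormal one-round confidence drop;
# the actual LoDen detection criterion differs and is given in the paper.
DROP_THRESHOLD = 0.3


def flag_probed_samples(confidence_history: np.ndarray) -> np.ndarray:
    """Flag local samples whose prediction confidence dropped abruptly.

    confidence_history has shape (num_rounds, num_local_samples); entry [t, i]
    is the received global model's confidence on sample i's true label at
    round t. Returns a boolean mask over the client's local samples.
    """
    # Confidence lost between consecutive rounds (positive = confidence fell).
    per_round_drop = confidence_history[:-1] - confidence_history[1:]
    return per_round_drop.max(axis=0) > DROP_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 5 rounds, 8 local samples: honest training keeps confidences fairly stable.
    history = np.clip(0.7 + 0.05 * rng.standard_normal((5, 8)).cumsum(axis=0), 0.0, 1.0)
    history[3, 2] -= 0.5  # sample 2 is suddenly mispredicted: a possible membership probe
    print("suspicious samples:", np.where(flag_probed_samples(history))[0])
```

A full defense would also act on the flagged samples so that the leaked membership information stays bounded, as LoDen does; that response step is omitted from this sketch.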