FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing

Ziqi Zhang, Yuanchun Li, Bingyan Liu, Yifeng Cai, Ding Li, Yao Guo, Xiangqun Chen
{"title":"FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing","authors":"Ziqi Zhang, Yuanchun Li, Bingyan Liu, Yifeng Cai, Ding Li, Yao Guo, Xiangqun Chen","doi":"10.1109/ICSE48619.2023.00049","DOIUrl":null,"url":null,"abstract":"Crowdsourcing Federated learning (CFL) is a new crowdsourcing development paradigm for the Deep Neural Network (DNN) models, also called “software 2.0”. In practice, the privacy of CFL can be compromised by many attacks, such as free-rider attacks, adversarial attacks, gradient leakage attacks, and inference attacks. Conventional defensive techniques have low efficiency because they deploy heavy encryption techniques or rely on Trusted Execution Environments (TEEs). To improve the efficiency of protecting CFL from these attacks, this paper proposes FedSlice to prevent malicious participants from getting the whole server-side model while keeping the performance goal of CFL. FedSlice breaks the server-side model into several slices and delivers one slice to each participant. Thus, a malicious participant can only get a subset of the server-side model, preventing them from effectively conducting effective attacks. We evaluate FedSlice against these attacks, and results show that FedSlice provides effective defense: the server-side model leakage is reduced from 100% to 43.45%, the success rate of adversarial attacks is reduced from 100% to 11.66%, the average accuracy of membership inference is reduced from 71.91% to 51.58%, and the data leakage from shared gradients is reduced to the level of random guesses. Besides, FedSlice only introduces less than 2% accuracy loss and about 14% computation overhead. To the best of our knowledge, this is the first paper to discuss defense methods against these attacks to the CFL framework.","PeriodicalId":376379,"journal":{"name":"2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSE48619.2023.00049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Crowdsourcing Federated Learning (CFL) is a new crowdsourcing development paradigm for Deep Neural Network (DNN) models, also called "software 2.0". In practice, the security and privacy of CFL can be compromised by many attacks, such as free-rider attacks, adversarial attacks, gradient leakage attacks, and inference attacks. Conventional defenses are inefficient because they deploy heavy encryption or rely on Trusted Execution Environments (TEEs). To protect CFL from these attacks more efficiently, this paper proposes FedSlice, which prevents malicious participants from obtaining the whole server-side model while preserving the performance goals of CFL. FedSlice breaks the server-side model into several slices and delivers one slice to each participant. A malicious participant therefore obtains only a subset of the server-side model, which prevents it from mounting effective attacks. We evaluate FedSlice against these attacks, and the results show that it provides an effective defense: server-side model leakage is reduced from 100% to 43.45%, the success rate of adversarial attacks drops from 100% to 11.66%, the average accuracy of membership inference falls from 71.91% to 51.58%, and data leakage from shared gradients is reduced to the level of random guessing. Moreover, FedSlice introduces less than 2% accuracy loss and about 14% computation overhead. To the best of our knowledge, this is the first paper to discuss defenses against these attacks on the CFL framework.
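The abstract does not fix an architecture or an API, so the following is a minimal PyTorch sketch of the slicing idea under one concrete interpretation: the server-side model is a one-hidden-layer MLP, and a "slice" is the sub-network induced by a disjoint group of hidden units. All names (full_model, make_slice, merge_slice) and hyperparameters are hypothetical illustrations, not the paper's actual method.

import torch
import torch.nn as nn

HIDDEN, NUM_SLICES = 256, 4
GROUP = HIDDEN // NUM_SLICES  # hidden units per slice

def full_model(in_dim=784, out_dim=10):
    # Stand-in for the server-side model: one hidden ReLU layer.
    return nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU(),
                         nn.Linear(HIDDEN, out_dim))

def make_slice(model, i):
    # Slice i is the sub-network over hidden units [i*GROUP, (i+1)*GROUP).
    # For this architecture, the slice outputs sum exactly to the full
    # model's output when the output bias is split evenly across slices.
    fc1, fc2 = model[0], model[2]
    lo, hi = i * GROUP, (i + 1) * GROUP
    s = nn.Sequential(nn.Linear(fc1.in_features, GROUP), nn.ReLU(),
                      nn.Linear(GROUP, fc2.out_features))
    with torch.no_grad():
        s[0].weight.copy_(fc1.weight[lo:hi])
        s[0].bias.copy_(fc1.bias[lo:hi])
        s[2].weight.copy_(fc2.weight[:, lo:hi])
        s[2].bias.copy_(fc2.bias / NUM_SLICES)
    return s

def merge_slice(model, s, i):
    # Write a locally trained slice back into the server-side model.
    # The shared output bias is left untouched here for simplicity.
    fc1, fc2 = model[0], model[2]
    lo, hi = i * GROUP, (i + 1) * GROUP
    with torch.no_grad():
        fc1.weight[lo:hi] = s[0].weight
        fc1.bias[lo:hi] = s[0].bias
        fc2.weight[:, lo:hi] = s[2].weight

server = full_model()
for pid in range(NUM_SLICES):
    local = make_slice(server, pid)  # participant sees only 1/NUM_SLICES of the model
    # ... participant trains `local` on its private data here ...
    merge_slice(server, local, pid)

Under this decomposition, any single participant holds only 1/NUM_SLICES of the server-side parameters, which matches the abstract's intuition that a malicious participant obtains only a subset of the model and so cannot mount effective whole-model attacks.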