Cost-Efficient Shuffling and Regrouping Based Defense for Federated Learning

Shun-Meng Huang, Yu-Wen Chen, Jian-Jhih Kuo
{"title":"基于成本高效洗牌和重组的联邦学习防御","authors":"Shun-Meng Huang, Yu-Wen Chen, Jian-Jhih Kuo","doi":"10.1109/GLOBECOM46510.2021.9685499","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) enables multiple user de-vices to collaboratively train a global machine learning (ML) model by uploading their local models to the central server for aggregation. However, attackers may upload tampered local models (e.g., label-flipping attack) to corrupt the global model. Existing defense methods focus on outlier detection, but they are computationally intensive and can be circumvented by advanced model tampering. We employ a shuffling-based defense model to isolate the attackers from ordinary users. To explore the intrinsic properties, we simplify the defense model problem and formulate it as a Markov Decision Problem (MDP) to find the optimal policy. Then, we introduce a novel notion, (re)grouping, into the defense model to propose a new cost-efficient defense framework termed SAGE. Experiment results manifest that SAGE can effectively mitigate the impact of attacks in FL by efficiently decreasing the ratio of attacker devices to ordinary user devices. SAGE increases the testing accuracy of the targeted class by at most 40%.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Cost-Efficient Shuffling and Regrouping Based Defense for Federated Learning\",\"authors\":\"Shun-Meng Huang, Yu-Wen Chen, Jian-Jhih Kuo\",\"doi\":\"10.1109/GLOBECOM46510.2021.9685499\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) enables multiple user de-vices to collaboratively train a global machine learning (ML) model by uploading their local models to the central server for aggregation. However, attackers may upload tampered local models (e.g., label-flipping attack) to corrupt the global model. Existing defense methods focus on outlier detection, but they are computationally intensive and can be circumvented by advanced model tampering. We employ a shuffling-based defense model to isolate the attackers from ordinary users. To explore the intrinsic properties, we simplify the defense model problem and formulate it as a Markov Decision Problem (MDP) to find the optimal policy. Then, we introduce a novel notion, (re)grouping, into the defense model to propose a new cost-efficient defense framework termed SAGE. Experiment results manifest that SAGE can effectively mitigate the impact of attacks in FL by efficiently decreasing the ratio of attacker devices to ordinary user devices. 
SAGE increases the testing accuracy of the targeted class by at most 40%.\",\"PeriodicalId\":200641,\"journal\":{\"name\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBECOM46510.2021.9685499\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Global Communications Conference (GLOBECOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM46510.2021.9685499","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Federated learning (FL) enables multiple user devices to collaboratively train a global machine learning (ML) model by uploading their local models to the central server for aggregation. However, attackers may upload tampered local models (e.g., via a label-flipping attack) to corrupt the global model. Existing defense methods focus on outlier detection, but they are computationally intensive and can be circumvented by advanced model tampering. We employ a shuffling-based defense model to isolate the attackers from ordinary users. To explore its intrinsic properties, we simplify the defense model problem and formulate it as a Markov Decision Problem (MDP) to find the optimal policy. Then, we introduce a novel notion, (re)grouping, into the defense model to propose a new cost-efficient defense framework termed SAGE. Experimental results show that SAGE can effectively mitigate the impact of attacks in FL by efficiently decreasing the ratio of attacker devices to ordinary user devices. SAGE increases the testing accuracy of the targeted class by up to 40%.
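
The abstract describes the setting at a high level: clients train local models, a label-flipping attacker tampers with its update, and the server shuffles clients into groups before aggregating. The minimal Python sketch below illustrates that setting under stated assumptions (a simple linear model, FedAvg-style averaging, and purely random grouping); it is not the authors' SAGE algorithm, and all function and variable names are hypothetical.

```python
# Illustrative sketch only: a toy FL round with a label-flipping attacker and
# random shuffling of clients into groups before aggregation. The grouping rule
# and model are assumptions for illustration, not the paper's SAGE framework.
import random
import numpy as np

def local_update(global_model, data, labels, flip=False, lr=0.1):
    """One gradient step of local training on a linear model; an attacker flips labels."""
    if flip:
        labels = 1 - labels                      # label-flipping attack (binary labels)
    w = global_model.copy()
    preds = data @ w
    grad = data.T @ (preds - labels) / len(labels)
    return w - lr * grad

def aggregate(models):
    """FedAvg-style aggregation: average the uploaded models."""
    return np.mean(models, axis=0)

def shuffle_and_group(client_ids, num_groups, rng):
    """Randomly shuffle clients and split them into groups, spreading attackers out
    so their per-group influence is limited (illustrative grouping rule)."""
    ids = client_ids.copy()
    rng.shuffle(ids)
    return [ids[i::num_groups] for i in range(num_groups)]

rng = random.Random(0)
np.random.seed(0)
num_clients, dim = 10, 5
attackers = {0, 1}                               # clients that flip labels
global_w = np.zeros(dim)
data = [np.random.randn(20, dim) for _ in range(num_clients)]
labels = [(d @ np.ones(dim) > 0).astype(float) for d in data]

for rnd in range(3):
    groups = shuffle_and_group(list(range(num_clients)), num_groups=2, rng=rng)
    group_models = []
    for g in groups:
        local_models = [local_update(global_w, data[c], labels[c], flip=(c in attackers))
                        for c in g]
        group_models.append(aggregate(local_models))
    global_w = aggregate(group_models)           # combine group models into the global model
print("global model after 3 rounds:", global_w)
```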