Detecting Malicious Model Updates from Federated Learning on Conditional Variational Autoencoder

Zhipin Gu, Yuexiang Yang
{"title":"基于条件变分自编码器的联邦学习检测恶意模型更新","authors":"Zhipin Gu, Yuexiang Yang","doi":"10.1109/IPDPS49936.2021.00075","DOIUrl":null,"url":null,"abstract":"In federated learning, the central server combines local model updates from the clients in the network to create an aggregated model. To protect clients’ privacy, the server is designed to have no visibility into how these updates are generated. The nature of federated learning makes detecting and defending against malicious model updates a challenging task. Unlike existing works that struggle to defend against Byzantine clients, the paper considers defending against targeted model poisoning attack in the federated learning setting. The adversary aims to reduce the model performance on targeted subtasks while maintaining the main task’s performance. This paper proposes Fedcvae, a robust and unsupervised federated learning framework where the central server uses conditional variational autoencoder to detect and exclude malicious model updates. Since the reconstruction error of malicious updates is much larger than that of benign ones, it can be used as an anomaly score. We formulate a dynamic threshold of reconstruction error to differentiate malicious updates from normal ones based on this idea. Fedcvae is tested with extensive experiments on IID and non-IID federated benchmarks, showing a competitive performance over existing aggregation methods under Byzantine attack and targeted model poisoning attack.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"Detecting Malicious Model Updates from Federated Learning on Conditional Variational Autoencoder\",\"authors\":\"Zhipin Gu, Yuexiang Yang\",\"doi\":\"10.1109/IPDPS49936.2021.00075\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In federated learning, the central server combines local model updates from the clients in the network to create an aggregated model. To protect clients’ privacy, the server is designed to have no visibility into how these updates are generated. The nature of federated learning makes detecting and defending against malicious model updates a challenging task. Unlike existing works that struggle to defend against Byzantine clients, the paper considers defending against targeted model poisoning attack in the federated learning setting. The adversary aims to reduce the model performance on targeted subtasks while maintaining the main task’s performance. This paper proposes Fedcvae, a robust and unsupervised federated learning framework where the central server uses conditional variational autoencoder to detect and exclude malicious model updates. Since the reconstruction error of malicious updates is much larger than that of benign ones, it can be used as an anomaly score. We formulate a dynamic threshold of reconstruction error to differentiate malicious updates from normal ones based on this idea. 
Fedcvae is tested with extensive experiments on IID and non-IID federated benchmarks, showing a competitive performance over existing aggregation methods under Byzantine attack and targeted model poisoning attack.\",\"PeriodicalId\":372234,\"journal\":{\"name\":\"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IPDPS49936.2021.00075\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPS49936.2021.00075","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 19

Abstract

In federated learning, the central server combines local model updates from the clients in the network to create an aggregated model. To protect clients' privacy, the server is designed to have no visibility into how these updates are generated. The nature of federated learning makes detecting and defending against malicious model updates a challenging task. Unlike existing works that focus on defending against Byzantine clients, this paper considers defending against targeted model poisoning attacks in the federated learning setting, where the adversary aims to degrade the model's performance on targeted subtasks while maintaining its performance on the main task. This paper proposes Fedcvae, a robust and unsupervised federated learning framework in which the central server uses a conditional variational autoencoder to detect and exclude malicious model updates. Since the reconstruction error of malicious updates is much larger than that of benign ones, it can serve as an anomaly score. Based on this idea, we formulate a dynamic threshold on the reconstruction error to differentiate malicious updates from benign ones. Fedcvae is evaluated with extensive experiments on IID and non-IID federated benchmarks, showing competitive performance over existing aggregation methods under both Byzantine attacks and targeted model poisoning attacks.
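
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of the detection scheme the abstract describes: each client's flattened model update is scored by the reconstruction error of a conditional variational autoencoder, and updates whose error exceeds a dynamic threshold are excluded before aggregation. The network sizes, the conditioning vector, the mean-plus-k-standard-deviations threshold rule, and the FedAvg-style aggregation are illustrative assumptions; the abstract does not specify them.

```python
# Sketch of Fedcvae-style filtering: CVAE reconstruction error as an
# anomaly score over client model updates, with a dynamic threshold.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, update_dim: int, cond_dim: int, latent_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(update_dim + cond_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, update_dim),
        )

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

def reconstruction_errors(model: CVAE, updates, cond):
    """Per-client anomaly score: mean squared reconstruction error."""
    with torch.no_grad():
        recon, _, _ = model(updates, cond)
    return ((recon - updates) ** 2).mean(dim=-1)

def accept_mask(errors, k: float = 2.0):
    """Dynamic threshold: accept updates within mean + k*std of this
    round's error distribution (an assumed rule, not the paper's formula)."""
    return errors <= errors.mean() + k * errors.std()

# Illustrative round: 10 clients, 1000-dim flattened updates, a one-hot
# conditioning vector. Training the CVAE (e.g. on updates presumed benign
# from earlier rounds) is omitted here.
updates = torch.randn(10, 1000)
cond = torch.zeros(10, 5)
cond[:, 0] = 1.0
model = CVAE(update_dim=1000, cond_dim=5)
keep = accept_mask(reconstruction_errors(model, updates, cond))
aggregated = updates[keep].mean(dim=0)  # FedAvg-style mean over accepted updates
```

Because the threshold is recomputed from each round's error distribution, it adapts as the global model evolves; the mean-plus-k-standard-deviations rule above is just one simple instantiation of such a dynamic threshold.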