Poisoning attack detection using client historical similarity in non-iid environments

Xintong You, Zhengqi Liu, Xu Yang, Xuyang Ding
{"title":"Poisoning attack detection using client historical similarity in non-iid environments","authors":"Xintong You, Zhengqi Liu, Xu Yang, Xuyang Ding","doi":"10.1109/Confluence52989.2022.9734158","DOIUrl":null,"url":null,"abstract":"Federated learning has drawn widespread attention as privacy-preserving solution, which has a protective effect on data security and privacy. It has unique distributed machine learning mechanism, namely model sharing instead of data sharing. However, the mechanism also leads to the fact that malicious clients can easily train local model based on poisoned data and upload it to the server for contaminating the global model, thus severely hampering the development of federated learning. In this paper, we build a federated learning system and simulate heterogeneous data on each client for training. Although we cannot directly differentiate malicious customers by the uploaded model in a heterogeneous data environment, by experiments we found some features that are used to distinguish malicious customers from benign customers during training. Given above, we propose a federated learning poisoning attack detection method for detecting malicious clients and ensuring aggregation quality. The method can filter out anomaly models by comparing the similarity of the historical changes of clients and gradually identifying attacker clients through reputation mechanism. We experimentally demonstrate that the method significantly improves the performance of the global model even when the proportion of malicious clients is as high as one-third.","PeriodicalId":261941,"journal":{"name":"2022 12th International Conference on Cloud Computing, Data Science & Engineering (Confluence)","volume":"132 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 12th International Conference on Cloud Computing, Data Science & Engineering (Confluence)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/Confluence52989.2022.9734158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Federated learning has drawn widespread attention as a privacy-preserving solution that protects data security and privacy. Its distinctive distributed machine learning mechanism shares models instead of data. However, this mechanism also means that malicious clients can easily train local models on poisoned data and upload them to the server to contaminate the global model, severely hampering the development of federated learning. In this paper, we build a federated learning system and simulate heterogeneous data on each client for training. Although uploaded models alone cannot directly distinguish malicious clients in a heterogeneous data environment, our experiments reveal features that separate malicious clients from benign ones during training. Accordingly, we propose a federated learning poisoning attack detection method that detects malicious clients and ensures aggregation quality. The method filters out anomalous models by comparing the similarity of each client's historical changes and gradually identifies attacker clients through a reputation mechanism. We experimentally demonstrate that the method significantly improves the performance of the global model even when the proportion of malicious clients is as high as one third.
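To make the detection idea concrete, the sketch below illustrates one plausible reading of the abstract: each client's current update is compared against that client's own previous update (its "historical change"), and a reputation score is decayed when the two diverge. This is a minimal illustration under assumed details, not the authors' exact algorithm; the class name `HistoricalSimilarityDetector`, the thresholds, and the decay/recovery constants are all illustrative assumptions.

```python
# Minimal sketch of historical-similarity filtering with a reputation
# mechanism, as described in the abstract. All names and constants here
# are assumptions for illustration, not the paper's published algorithm.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened model updates."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class HistoricalSimilarityDetector:
    def __init__(self, num_clients, sim_threshold=0.5, rep_threshold=0.2, decay=0.7):
        self.history = [None] * num_clients      # last accepted update per client
        self.reputation = np.ones(num_clients)   # every client starts fully trusted
        self.sim_threshold = sim_threshold       # below this, the round looks anomalous
        self.rep_threshold = rep_threshold       # below this, the client is excluded
        self.decay = decay                       # multiplicative reputation penalty

    def filter(self, updates):
        """updates: dict {client_id: flattened np.ndarray}. Returns accepted ids."""
        accepted = []
        for cid, upd in updates.items():
            prev = self.history[cid]
            if prev is not None and cosine(upd, prev) < self.sim_threshold:
                # Update is inconsistent with this client's own history: penalize.
                self.reputation[cid] *= self.decay
            else:
                # Consistent behaviour slowly restores trust, capped at 1.0.
                self.reputation[cid] = min(1.0, self.reputation[cid] + 0.05)
            self.history[cid] = upd
            if self.reputation[cid] >= self.rep_threshold:
                accepted.append(cid)
        return accepted
```

In each round, the server would call `filter` on the received flattened updates and aggregate (for example, with FedAvg) only the accepted clients. The thresholds would need tuning per task, since non-IID data makes even benign updates less self-similar than in the IID case.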