Hidden Vulnerabilities in Cosine Similarity based Poisoning Defense

Harsh Kasyap, S. Tripathy
{"title":"Hidden Vulnerabilities in Cosine Similarity based Poisoning Defense","authors":"Harsh Kasyap, S. Tripathy","doi":"10.1109/CISS53076.2022.9751167","DOIUrl":null,"url":null,"abstract":"Federated learning is a collaborative learning paradigm that deploys the model to the edge for training over the local data of the participants under the supervision of a trusted server. Despite the fact that this paradigm guarantees privacy, it is vulnerable to poisoning. Malicious participants alter their locally maintained data or model to publish an insidious update, to reduce the accuracy of the global model. Recent byzantine-robust (euclidean or cosine-similarity) based aggregation techniques, claim to protect against data poisoning attacks. On the other hand, model poisoning attacks are more insidious and adaptable to current defenses. Though different local model poisoning attacks are proposed to attack euclidean based defenses, we could not find any work to investigate cosine-similarity based defenses. We examine such defenses (FLTrust and FoolsGold) and find their underlying issues. We also demonstrate an efficient layer replacement attack that is adaptable to FLTrust, impacting to lower the accuracy up to 10%. Further, we propose a cosine-similarity based local model poisoning attack (CSA) on FLTrust and FoolsGold, which generates diverse and poisonous client updates. The later attack maintains a high trust score and a high averaged weighted score for respective defenses. Experiments are carried out on different datasets, with varying attack capabilities and settings, to study the effectiveness of the proposed attack. The results show that the test loss is increased by 10 - 20×.","PeriodicalId":305918,"journal":{"name":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS53076.2022.9751167","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Federated learning is a collaborative learning paradigm that deploys the model to the edge for training over the participants' local data under the supervision of a trusted server. Although this paradigm preserves privacy, it is vulnerable to poisoning: malicious participants alter their locally maintained data or model to publish an insidious update that reduces the accuracy of the global model. Recent Byzantine-robust aggregation techniques, based on Euclidean distance or cosine similarity, claim to protect against data poisoning attacks. Model poisoning attacks, however, are more insidious and adapt to current defenses. Although various local model poisoning attacks have been proposed against Euclidean-based defenses, we found no work investigating cosine-similarity based defenses. We examine two such defenses, FLTrust and FoolsGold, and identify their underlying issues. We also demonstrate an efficient layer replacement attack that adapts to FLTrust, lowering the accuracy by up to 10%. Further, we propose a cosine-similarity based local model poisoning attack (CSA) on FLTrust and FoolsGold, which generates diverse yet poisonous client updates. The latter attack maintains a high trust score and a high average weight score under the respective defenses. Experiments are carried out on different datasets, with varying attack capabilities and settings, to study the effectiveness of the proposed attack. The results show that the test loss increases by 10-20x.
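To make the trust-score mechanism the abstract refers to concrete, below is a minimal sketch of cosine-similarity based aggregation in the spirit of FLTrust, assuming model updates are flattened into NumPy vectors. The function names, the zero-trust fallback, and the toy example are illustrative assumptions, not the paper's implementation.

import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened update vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def fltrust_style_aggregate(server_update: np.ndarray,
                            client_updates: list[np.ndarray]) -> np.ndarray:
    """Weight each client update by its ReLU-clipped cosine similarity to a
    server update computed on a small clean root dataset (the trust score
    idea), rescale updates to the server update's norm, then average."""
    server_norm = np.linalg.norm(server_update)
    trust_scores = np.array(
        [max(0.0, cosine_sim(g, server_update)) for g in client_updates])
    if trust_scores.sum() == 0.0:
        # Illustrative fallback: trust only the server's own update.
        return server_update
    rescaled = [g * (server_norm / (np.linalg.norm(g) + 1e-12))
                for g in client_updates]
    weighted = sum(ts * g for ts, g in zip(trust_scores, rescaled))
    return weighted / trust_scores.sum()

# Toy example: an update pointing opposite to the server direction gets
# cosine similarity -1, which ReLU clips to a trust score of 0.
rng = np.random.default_rng(0)
g_server = rng.normal(size=10)
g_benign = g_server + 0.1 * rng.normal(size=10)
g_poison = -g_server
agg = fltrust_style_aggregate(g_server, [g_benign, g_poison])

In this sketch, a poisoned update whose direction opposes the server update is simply zeroed out; the layer replacement and CSA attacks described above succeed precisely by keeping the cosine similarity (and hence the trust or weight score) high while the update remains poisonous.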