Going Haywire: False Friends in Federated Learning and How to Find Them

William Aiken, Paula Branco, Guy-Vincent Jourdan
{"title":"Going Haywire: False Friends in Federated Learning and How to Find Them","authors":"William Aiken, Paula Branco, Guy-Vincent Jourdan","doi":"10.1145/3579856.3595790","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) promises to offer a major paradigm shift in the way deep learning models are trained at scale, yet malicious clients can surreptitiously embed backdoors into models via trivial augmentation on their own subset of the data. This is especially true in small- and medium-scale FL systems, which consist of dozens, rather than millions, of clients. In this work, we investigate a novel attack scenario for an FL architecture consisting of multiple non-i.i.d. silos of data in which each distribution has a unique backdoor attacker and where the model convergences of adversaries are not more similar than those of benign clients. We propose a new method, dubbed Haywire, as a security-in-depth approach to respond to this novel attack scenario. Our defense utilizes a combination of kPCA dimensionality reduction of fully-connected layers in the network, KMeans anomaly detection to drop anomalous clients, and server aggregation robust to outliers via the Geometric Median. Our solution prevents the contamination of the global model despite having no access to the backdoor triggers. We evaluate the performance of Haywire from model-accuracy, defense-performance, and attack-success perspectives against multiple baselines. Through an extensive set of experiments, we find that Haywire produces the best performances at preventing backdoor attacks while simultaneously not unfairly penalizing benign clients. We carried out additional in-depth experiments across multiple runs that demonstrate the reliability of Haywire.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579856.3595790","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated Learning (FL) promises a major paradigm shift in the way deep learning models are trained at scale, yet malicious clients can surreptitiously embed backdoors into models via trivial augmentation of their own subset of the data. This is especially true in small- and medium-scale FL systems, which consist of dozens, rather than millions, of clients. In this work, we investigate a novel attack scenario for an FL architecture consisting of multiple non-i.i.d. silos of data, in which each distribution has a unique backdoor attacker and the model convergences of adversaries are no more similar to one another than those of benign clients. We propose a new method, dubbed Haywire, as a security-in-depth approach to this novel attack scenario. Our defense combines kPCA dimensionality reduction of the fully-connected layers in the network, KMeans anomaly detection to drop anomalous clients, and server aggregation robust to outliers via the Geometric Median. Our solution prevents contamination of the global model despite having no access to the backdoor triggers. We evaluate Haywire from model-accuracy, defense-performance, and attack-success perspectives against multiple baselines. Through an extensive set of experiments, we find that Haywire delivers the best performance at preventing backdoor attacks while not unfairly penalizing benign clients. Additional in-depth experiments across multiple runs demonstrate the reliability of Haywire.
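To make the three-stage pipeline concrete, here is a minimal sketch of one server-side aggregation round under the design described above. It is not the authors' reference implementation: the function names (haywire_aggregate, geometric_median), the input shapes, the RBF kernel, the two-cluster setting, and the assumption that benign clients form the majority cluster are all illustrative choices for this sketch. The geometric median is computed with Weiszfeld's algorithm.

```python
# Sketch of a Haywire-style aggregation round, assuming each client submits a
# flattened update of its fully-connected layers. Hypothetical names/shapes.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

def geometric_median(points, n_iter=100, eps=1e-8):
    """Weiszfeld's algorithm: the point minimizing the sum of Euclidean
    distances to all rows of `points`; robust to outlying updates."""
    median = points.mean(axis=0)
    for _ in range(n_iter):
        dists = np.maximum(np.linalg.norm(points - median, axis=1), eps)
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

def haywire_aggregate(fc_updates, n_components=2, n_clusters=2):
    """The three stages named in the abstract:
    1. kPCA projection of the flattened fully-connected-layer updates,
    2. KMeans to flag and drop the anomalous (minority) cluster,
    3. geometric-median aggregation of the surviving updates."""
    embedded = KernelPCA(n_components=n_components,
                         kernel="rbf").fit_transform(fc_updates)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedded)
    # Assumption for this sketch: benign clients form the larger cluster.
    benign = labels == np.bincount(labels).argmax()
    return geometric_median(fc_updates[benign])

# Toy usage: 10 benign clients near zero, 2 poisoned clients shifted away.
rng = np.random.default_rng(0)
updates = np.vstack([rng.normal(0.0, 0.1, (10, 64)),
                     rng.normal(3.0, 0.1, (2, 64))])
agg = haywire_aggregate(updates)
print(agg.shape)  # (64,)
```

In this toy setup the poisoned updates form a small, well-separated cluster in the kPCA embedding, so KMeans isolates and drops them, and the geometric median further limits the influence of any anomalous update that slips through, all without the server ever seeing a backdoor trigger.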