On Feasibility of Server-side Backdoor Attacks on Split Learning

B. Tajalli, O. Ersoy, S. Picek
{"title":"On Feasibility of Server-side Backdoor Attacks on Split Learning","authors":"B. Tajalli, O. Ersoy, S. Picek","doi":"10.1109/SPW59333.2023.00014","DOIUrl":null,"url":null,"abstract":"Split learning is a collaborative learning design that allows several participants (clients) to train a shared model while keeping their datasets private. In split learning, the network is split into two halves: clients have the initial part until the cut layer, and the remaining part of the network is on the server side. In the training process, clients feed the data into the first part of the network and send the output (smashed data) to the server, which uses it as the input for the remaining part of the network. Recent studies demonstrate that collaborative learning models, specifically federated learning, are vulnerable to security and privacy attacks such as model inference and backdoor attacks. While there have been studies regarding inference attacks on split learning, it has not yet been tested for backdoor attacks. This paper performs a novel backdoor attack on split learning and studies its effectiveness. Despite traditional backdoor attacks done on the client side, we inject the backdoor trigger from the server side. We provide two attack methods: one using a surrogate client and another using an autoencoder to poison the model via incoming smashed data and its outgoing gradient toward the innocent participants. The results show that despite using strong patterns and injection methods, split learning is highly robust and resistant to such poisoning attacks. While we get the attack success rate of 100% as our best result for the MNIST dataset, in most of the other cases, our attack shows little success when increasing the cut layer.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Security and Privacy Workshops (SPW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPW59333.2023.00014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Split learning is a collaborative learning design that allows several participants (clients) to train a shared model while keeping their datasets private. In split learning, the network is split into two parts: clients hold the initial part up to the cut layer, and the remaining part of the network resides on the server side. During training, clients feed their data into the first part of the network and send the output (smashed data) to the server, which uses it as the input for the remaining part of the network. Recent studies demonstrate that collaborative learning models, specifically federated learning, are vulnerable to security and privacy attacks such as model inference and backdoor attacks. While there have been studies on inference attacks against split learning, it has not yet been tested against backdoor attacks. This paper performs a novel backdoor attack on split learning and studies its effectiveness. Unlike traditional backdoor attacks, which are carried out on the client side, we inject the backdoor trigger from the server side. We provide two attack methods: one using a surrogate client and another using an autoencoder to poison the model via the incoming smashed data and its outgoing gradient toward the innocent participants. The results show that despite using strong trigger patterns and injection methods, split learning is highly robust and resistant to such poisoning attacks. While we achieve an attack success rate of 100% as our best result on the MNIST dataset, in most of the other cases our attack shows little success as the cut layer is moved deeper.
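
The abstract describes the split learning protocol: the client computes the smashed data up to the cut layer, the server completes the forward and backward pass, and a gradient with respect to the smashed data is returned to the client. Below is a minimal sketch of one such training step, assuming a PyTorch-style setup; the layer sizes, optimizers, and cut-layer placement are illustrative assumptions, not the paper's exact configuration. The comment on the returned gradient marks the point where a malicious server, as in this paper's threat model, could interfere.

```python
# Minimal sketch of one split-learning training step (illustrative assumptions,
# not the paper's exact architecture or hyperparameters).
import torch
import torch.nn as nn

# Client holds the layers up to the cut layer; server holds the rest.
client_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
server_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    """Client forward -> smashed data -> server forward/backward ->
    gradient of smashed data returned -> client backward."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    # Client-side forward pass up to the cut layer.
    smashed = client_model(x)

    # "Send" the smashed data to the server (detached copy, as over a network).
    smashed_server = smashed.detach().requires_grad_(True)

    # Server-side forward and backward pass.
    logits = server_model(smashed_server)
    loss = loss_fn(logits, y)
    loss.backward()
    server_opt.step()

    # Gradient w.r.t. the smashed data is "sent back" to the client.
    # A malicious server could tamper with this gradient (or with how it
    # processes the incoming smashed data) to try to implant a backdoor.
    grad_to_client = smashed_server.grad

    # Client completes its backward pass using the received gradient.
    smashed.backward(grad_to_client)
    client_opt.step()
    return loss.item()

# Example usage with a dummy MNIST-shaped batch.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(split_training_step(x, y))
```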