Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning
Yuchen Tian; Weizhe Zhang; Andrew Simpson; Yang Liu; Zoe Lin Jiang
Computer Journal, vol. 66, no. 3, pp. 711-726 (published online 2021-10-01). DOI: 10.1093/comjnl/bxab192
Citations: 4
Abstract
Federated learning (FL), a variant of distributed learning (DL), supports the training of a shared model without accessing private data from different sources. Despite its benefits with regard to privacy preservation, FL's distributed nature and privacy constraints make it vulnerable to data poisoning attacks. Existing defenses, primarily designed for DL, are typically not well adapted to FL. In this paper, we study such attacks and defenses. In doing so, we start from the perspective of DL and then give consideration to a real-world FL scenario, with the aim of exploring the requisites of a desirable defense in FL. Our study shows that (i) the batch size used in each training round affects the effectiveness of defenses in DL, (ii) the defenses investigated are somewhat effective and moderately influenced by batch size in FL settings and (iii) non-IID data makes it more difficult to defend against data poisoning attacks in FL. Based on these findings, we discuss the key challenges and possible directions in defending against such attacks in FL. In addition, we propose Detect and Suppress the Potential Outliers (DSPO), a defense against data poisoning attacks in FL scenarios. Our results show that DSPO outperforms other defenses in several cases.
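The abstract names the proposed defense, DSPO (Detect and Suppress the Potential Outliers), but does not describe its mechanics. As a minimal, purely illustrative sketch of the general idea of detecting anomalous client updates and excluding them from federated aggregation, the following Python snippet drops updates that lie far from the coordinate-wise median of all updates before averaging. The function name, the distance test and the threshold are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical sketch of a "detect and suppress outliers" aggregation rule;
# the median-distance test and all names below are illustrative assumptions,
# not the paper's actual DSPO algorithm.
import numpy as np

def aggregate_suppress_outliers(updates, z_thresh=2.0):
    """Average client updates after dropping likely poisoned ones.

    updates: list of 1-D numpy arrays, one flattened update per client.
    z_thresh: updates whose distance from the coordinate-wise median
              exceeds mean + z_thresh * std of all distances are suppressed.
    """
    stacked = np.stack(updates)                       # (n_clients, dim)
    median = np.median(stacked, axis=0)               # robust centre of updates
    dists = np.linalg.norm(stacked - median, axis=1)  # one distance per client
    cutoff = dists.mean() + z_thresh * dists.std()
    keep = dists <= cutoff                            # suppress the outliers
    return stacked[keep].mean(axis=0)                 # average what remains

# Toy check: nine honest clients near zero, one poisoned client far away.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, size=10) for _ in range(9)]
updates.append(np.full(10, 5.0))                      # the poisoned update
print(aggregate_suppress_outliers(updates))           # stays near zero
```

In this toy run the poisoned update sits roughly 15 units from the median while honest updates sit well under one unit, so it falls outside the mean-plus-two-standard-deviations cutoff and is excluded from the average.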
Journal description:
The Computer Journal is one of the longest-established journals serving all branches of the academic computer science community. It is currently published in four sections.