{"title":"Hidden Vulnerabilities in Cosine Similarity based Poisoning Defense","authors":"Harsh Kasyap, S. Tripathy","doi":"10.1109/CISS53076.2022.9751167","DOIUrl":null,"url":null,"abstract":"Federated learning is a collaborative learning paradigm that deploys the model to the edge for training over the local data of the participants under the supervision of a trusted server. Despite the fact that this paradigm guarantees privacy, it is vulnerable to poisoning. Malicious participants alter their locally maintained data or model to publish an insidious update, to reduce the accuracy of the global model. Recent byzantine-robust (euclidean or cosine-similarity) based aggregation techniques, claim to protect against data poisoning attacks. On the other hand, model poisoning attacks are more insidious and adaptable to current defenses. Though different local model poisoning attacks are proposed to attack euclidean based defenses, we could not find any work to investigate cosine-similarity based defenses. We examine such defenses (FLTrust and FoolsGold) and find their underlying issues. We also demonstrate an efficient layer replacement attack that is adaptable to FLTrust, impacting to lower the accuracy up to 10%. Further, we propose a cosine-similarity based local model poisoning attack (CSA) on FLTrust and FoolsGold, which generates diverse and poisonous client updates. The later attack maintains a high trust score and a high averaged weighted score for respective defenses. Experiments are carried out on different datasets, with varying attack capabilities and settings, to study the effectiveness of the proposed attack. The results show that the test loss is increased by 10 - 20×.","PeriodicalId":305918,"journal":{"name":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS53076.2022.9751167","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
Federated learning is a collaborative learning paradigm that deploys the model to the edge for training over the participants' local data under the supervision of a trusted server. Although this paradigm preserves privacy, it is vulnerable to poisoning. Malicious participants alter their locally held data or model and publish an insidious update that degrades the accuracy of the global model. Recent Byzantine-robust aggregation techniques, based on Euclidean distance or cosine similarity, claim to protect against data poisoning attacks. Model poisoning attacks, on the other hand, are more insidious and can adapt to current defenses. Although various local model poisoning attacks have been proposed against Euclidean-distance-based defenses, we could not find any work investigating cosine-similarity-based defenses. We examine such defenses (FLTrust and FoolsGold) and identify their underlying weaknesses. We also demonstrate an efficient layer-replacement attack adaptable to FLTrust, which lowers accuracy by up to 10%. Further, we propose a cosine-similarity-based local model poisoning attack (CSA) on FLTrust and FoolsGold, which generates diverse, poisonous client updates. The latter attack maintains a high trust score and a high averaged weighted score under the respective defenses. Experiments are carried out on different datasets, with varying attack capabilities and settings, to study the effectiveness of the proposed attack. The results show that the test loss increases by 10-20×.
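To make the attack surface concrete, the sketch below illustrates a simplified FLTrust-style cosine-similarity aggregation rule of the kind the paper targets. It is a minimal sketch, not the paper's implementation: the function name `fltrust_aggregate`, the toy vectors, and the numerical details are our own, and the mechanism (ReLU-clipped cosine trust scores against a server update computed on a small root dataset, norm rescaling, and trust-weighted averaging) follows the published FLTrust description rather than any code released with this work.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Sketch of FLTrust-style aggregation: each client update is weighted by the
    ReLU-clipped cosine similarity with a trusted server update, rescaled to the
    server update's norm, and averaged with the trust scores as weights."""
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    scores, rescaled = [], []
    for g in client_updates:
        g = np.asarray(g, dtype=float)
        cos = np.dot(g, g0) / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))  # ReLU: negatively aligned updates get zero trust
        rescaled.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))  # norm clipping
    scores = np.array(scores)
    if scores.sum() == 0:
        return g0  # no trusted client update; fall back to the server update
    return np.average(rescaled, axis=0, weights=scores)

# Toy usage: a benign update aligned with the server direction gets high trust,
# while a sign-flipped (crudely poisoned) update receives zero trust.
server = np.array([1.0, 0.5, -0.2])
benign = server + 0.05 * np.random.randn(3)
flipped = -server
print(fltrust_aggregate([benign, flipped], server))
```

Under this rule, a naive poisoned update pointing away from the server update is simply zeroed out; the attacks described in the abstract instead craft updates that keep a high cosine similarity (and hence a high trust or weighted score) while still harming the global model.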