SoK: Systematizing Attack Studies in Federated Learning – From Sparseness to Completeness

Geetanjli Sharma, Pathum Chamikara Mahawaga Arachchige, Mohan Baruwal Chhetri, Yi-Ping Phoebe Chen

Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security
Published: 2023-07-10 · DOI: 10.1145/3579856.3590328
Citations: 0
Abstract
Federated Learning (FL) is a machine learning technique that enables multiple parties to collaboratively train a model using their private datasets. Given its decentralized nature, FL has inherent vulnerabilities that make it susceptible to adversarial attacks. The success of an attack on FL depends on several (latent) factors, including the adversary's strength, the chosen attack strategy, and the effectiveness of the defense measures in place. There is a growing body of literature on empirical attack studies in FL, but no systematic way to compare and evaluate the completeness of these studies, which raises questions about their validity. To address this problem, we introduce a causal model that captures the relationships among the different (latent) factors, and their reflexive indicators, that can impact the success of an attack on FL. The proposed model, inspired by structural equation modeling, helps systematize the existing literature on FL attack studies and provides a way to compare and contrast their completeness. We validate the model and demonstrate its utility through an experimental evaluation of selected attack studies. Our aim is to help researchers in the FL domain design more complete attack studies and improve the understanding of FL vulnerabilities.