Priyesh Ranjan, Ashish Gupta, Federico Coró, Sajal Kumar Das
{"title":"针对后门攻击者的鲁棒联邦学习","authors":"Priyesh Ranjan, Ashish Gupta, Federico Coró, Sajal Kumar Das","doi":"10.1109/INFOCOMWKSHPS57453.2023.10225922","DOIUrl":null,"url":null,"abstract":"Federated learning is a privacy-preserving alter-native for distributed learning with no involvement of data transfer. As the server does not have any control on clients' actions, some adversaries may participate in learning to introduce corruption into the underlying model. Backdoor attacker is one such adversary who injects a trigger pattern into the data to manipulate the model outcomes on a specific sub-task. This work aims to identify backdoor attackers and to mitigate their effects by isolating their weight updates. Leveraging the correlation between clients' gradients, we propose two graph theoretic algorithms to separate out attackers from the benign clients. Under a classification task, the experimental results show that our algorithms are effective and robust to the attackers who add backdoor trigger patterns at different location in targeted images. The results also evident that our algorithms are superior than existing methods especially when numbers of attackers are more than the normal clients.","PeriodicalId":354290,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robust Federated Learning against Backdoor Attackers\",\"authors\":\"Priyesh Ranjan, Ashish Gupta, Federico Coró, Sajal Kumar Das\",\"doi\":\"10.1109/INFOCOMWKSHPS57453.2023.10225922\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning is a privacy-preserving alter-native for distributed learning with no involvement of data transfer. 
As the server does not have any control on clients' actions, some adversaries may participate in learning to introduce corruption into the underlying model. Backdoor attacker is one such adversary who injects a trigger pattern into the data to manipulate the model outcomes on a specific sub-task. This work aims to identify backdoor attackers and to mitigate their effects by isolating their weight updates. Leveraging the correlation between clients' gradients, we propose two graph theoretic algorithms to separate out attackers from the benign clients. Under a classification task, the experimental results show that our algorithms are effective and robust to the attackers who add backdoor trigger patterns at different location in targeted images. The results also evident that our algorithms are superior than existing methods especially when numbers of attackers are more than the normal clients.\",\"PeriodicalId\":354290,\"journal\":{\"name\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10225922\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM 
WKSHPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10225922","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust Federated Learning against Backdoor Attackers
Federated learning is a privacy-preserving alternative to centralized learning that involves no raw-data transfer. Because the server has no control over clients' actions, some adversaries may participate in learning in order to corrupt the underlying model. A backdoor attacker is one such adversary, who injects a trigger pattern into the data to manipulate the model's outcomes on a specific sub-task. This work aims to identify backdoor attackers and to mitigate their effects by isolating their weight updates. Leveraging the correlation between clients' gradients, we propose two graph-theoretic algorithms to separate attackers from benign clients. On a classification task, the experimental results show that our algorithms are effective and robust against attackers who add backdoor trigger patterns at different locations in the targeted images. The results also show that our algorithms outperform existing methods, especially when the attackers outnumber the benign clients.
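The abstract does not spell out the two algorithms, but the core idea it describes — using correlation between clients' gradients to graph-theoretically separate attackers from benign clients — can be illustrated with a minimal sketch. The following is a hypothetical simplification, not the paper's method: it thresholds pairwise cosine similarity of flattened client updates to build a graph, then treats the largest connected component as the benign group (a heuristic that, unlike the paper's algorithms, assumes benign clients form the majority cluster).

```python
# Hypothetical illustration of gradient-correlation-based client separation.
# NOT the paper's algorithms: a simple similarity-graph heuristic for intuition.
import numpy as np


def cosine_similarity(u, v):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def separate_clients(updates, threshold=0.8):
    """updates: list of 1-D numpy arrays (flattened client gradients).

    Builds a graph with an edge between clients whose updates are
    sufficiently correlated, then labels the largest connected component
    as benign and everything else as suspect.
    """
    n = len(updates)
    # Adjacency lists: edge (i, j) iff updates i and j are similar enough.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if cosine_similarity(updates[i], updates[j]) >= threshold:
                adj[i].append(j)
                adj[j].append(i)
    # Connected components via iterative DFS.
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        seen.add(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        components.append(comp)
    benign = max(components, key=len)  # majority-cluster heuristic
    suspects = [i for i in range(n) if i not in benign]
    return sorted(benign), sorted(suspects)


# Synthetic example: three benign clients share one gradient direction,
# two backdoor attackers share another.
rng = np.random.default_rng(0)
benign_dir, attack_dir = rng.normal(size=50), rng.normal(size=50)
updates = [benign_dir + 0.05 * rng.normal(size=50) for _ in range(3)]
updates += [attack_dir + 0.05 * rng.normal(size=50) for _ in range(2)]
benign_ids, suspect_ids = separate_clients(updates)
print(benign_ids, suspect_ids)
```

The server would then aggregate only the updates of `benign_ids`. The paper's actual algorithms presumably differ, in particular to remain robust when attackers outnumber benign clients, where a largest-component rule would fail.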