A highly efficient, confidential, and continuous federated learning backdoor attack strategy
Jiarui Cao, Liehuang Zhu
2022 14th International Conference on Machine Learning and Computing (ICMLC), 18 February 2022
DOI: 10.1145/3529836.3529845
Federated learning is a form of distributed machine learning. Researchers have studied federated learning's security defences and backdoor attacks extensively. However, most studies assume that the participants' data are i.i.d. (independent and identically distributed). This paper evaluates the security of non-i.i.d. federated learning and proposes a new attack strategy. Compared with existing attack strategies, our approach makes three innovations. First, we defeat the FoolsGold [1] defence through negotiation among the attackers. Second, we propose a modified gradient-upload strategy for the FedSGD backdoor attack, which significantly improves the attack's confidentiality over the original scheme. Finally, we offer a bit-Trojan method that makes the backdoor continuous on non-i.i.d. federated learning. We conduct extensive experiments on different datasets to show that our backdoor attack strategy is highly efficient, confidential, and continuous on non-i.i.d. federated learning.
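The abstract does not spell out the paper's modified gradient-upload strategy, so as background only, here is a minimal sketch of the FedSGD setting it targets: honest clients upload gradients computed on their local (possibly non-i.i.d.) data, the server averages them, and a malicious client uploads a scaled gradient computed on poisoned data so that its contribution survives averaging. All function names, the linear model, and the scaling trick are illustrative assumptions, not the paper's method.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def fedsgd_round(w, clients, lr=0.1):
    """One FedSGD round: the server averages uploaded gradients and steps."""
    grads = [upload(w) for upload in clients]
    return w - lr * np.mean(grads, axis=0)

def honest(X, y):
    """An honest client uploads its true local gradient."""
    return lambda w: local_gradient(w, X, y)

def malicious(X_poison, y_poison, boost=5.0):
    """A backdoor attacker computes its gradient on poisoned data and
    scales it so the poisoned direction survives server-side averaging
    (a generic boosting trick, assumed here for illustration)."""
    return lambda w: boost * local_gradient(w, X_poison, y_poison)

rng = np.random.default_rng(0)
w = np.zeros(3)
clients = [
    honest(rng.normal(size=(20, 3)), rng.normal(size=20)),
    honest(rng.normal(size=(20, 3)), rng.normal(size=20)),
    malicious(rng.normal(size=(20, 3)), np.ones(20)),  # poisoned labels
]
for _ in range(10):
    w = fedsgd_round(w, clients)
print(w)  # global model after 10 rounds, pulled toward the poisoned objective
```

Defences such as FoolsGold try to detect this by flagging clients whose uploaded gradients are suspiciously similar across rounds; the paper's first contribution is coordinating multiple attackers to evade exactly that similarity check.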