Xueluan Gong, Yanjiao Chen, Qian Wang, Weihan Kong
{"title":"联合学习中的后门攻击和防御:现状、分类和未来方向","authors":"Xueluan Gong, Yanjiao Chen, Qian Wang, Weihan Kong","doi":"10.1109/MWC.017.2100714","DOIUrl":null,"url":null,"abstract":"The federated learning framework is designed for massively distributed training of deep learning models among thousands of participants without compromising the privacy of their training datasets. The training dataset across participants usually has heterogeneous data distributions. Besides, the central server aggregates the updates provided by different parties, but has no visibility into how such updates are created. The inherent characteristics of federated learning may incur a severe security concern. The malicious participants can upload poisoned updates to introduce backdoored functionality into the global model, in which the backdoored global model will misclassify all the malicious images (i.e., attached with the backdoor trigger) into a false label but will behave normally in the absence of the backdoor trigger. In this work, we present a comprehensive review of the state-of-the-art backdoor attacks and defenses in federated learning. We classify the existing backdoor attacks into two categories: data poisoning attacks and model poisoning attacks, and divide the defenses into anomaly updates detection, robust federated training, and backdoored model restoration. We give a detailed comparison of both attacks and defenses through experiments. Lastly, we pinpoint a variety of potential future directions of both backdoor attacks and defenses in the framework of federated learning.","PeriodicalId":13342,"journal":{"name":"IEEE Wireless Communications","volume":"30 1","pages":"114-121"},"PeriodicalIF":10.9000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Backdoor Attacks and Defenses in Federated Learning: State-of-the-Art, Taxonomy, and Future Directions\",\"authors\":\"Xueluan Gong, Yanjiao Chen, Qian Wang, Weihan Kong\",\"doi\":\"10.1109/MWC.017.2100714\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The federated learning framework is designed for massively distributed training of deep learning models among thousands of participants without compromising the privacy of their training datasets. The training dataset across participants usually has heterogeneous data distributions. Besides, the central server aggregates the updates provided by different parties, but has no visibility into how such updates are created. The inherent characteristics of federated learning may incur a severe security concern. The malicious participants can upload poisoned updates to introduce backdoored functionality into the global model, in which the backdoored global model will misclassify all the malicious images (i.e., attached with the backdoor trigger) into a false label but will behave normally in the absence of the backdoor trigger. In this work, we present a comprehensive review of the state-of-the-art backdoor attacks and defenses in federated learning. We classify the existing backdoor attacks into two categories: data poisoning attacks and model poisoning attacks, and divide the defenses into anomaly updates detection, robust federated training, and backdoored model restoration. We give a detailed comparison of both attacks and defenses through experiments. 
Lastly, we pinpoint a variety of potential future directions of both backdoor attacks and defenses in the framework of federated learning.\",\"PeriodicalId\":13342,\"journal\":{\"name\":\"IEEE Wireless Communications\",\"volume\":\"30 1\",\"pages\":\"114-121\"},\"PeriodicalIF\":10.9000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Wireless Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/MWC.017.2100714\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Wireless Communications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/MWC.017.2100714","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Backdoor Attacks and Defenses in Federated Learning: State-of-the-Art, Taxonomy, and Future Directions
The federated learning framework is designed for massively distributed training of deep learning models among thousands of participants without compromising the privacy of their training datasets. Training data distributions usually differ across participants. Moreover, the central server aggregates the updates provided by different parties but has no visibility into how these updates are created. These inherent characteristics of federated learning raise severe security concerns: malicious participants can upload poisoned updates that introduce backdoor functionality into the global model. The backdoored global model misclassifies any input stamped with the backdoor trigger into an attacker-chosen label, yet behaves normally on inputs without the trigger. In this work, we present a comprehensive review of state-of-the-art backdoor attacks and defenses in federated learning. We classify existing backdoor attacks into two categories, data poisoning attacks and model poisoning attacks, and divide the defenses into anomalous update detection, robust federated training, and backdoored model restoration. We give a detailed experimental comparison of both attacks and defenses. Lastly, we pinpoint a variety of potential future directions for both backdoor attacks and defenses in the federated learning framework.
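To make the abstract's taxonomy concrete, the following minimal Python sketches illustrate the two attack categories and one defense category it names. All function names (poison_example, fedavg, local_train, malicious_update, clip_and_aggregate) and parameter values are illustrative assumptions, not the exact algorithms of the surveyed papers. The first sketch shows data poisoning: a trigger patch is stamped onto a training image and its label is flipped to the attacker's target.

```python
import numpy as np

def poison_example(image, label, target_label=7, patch=3, value=1.0):
    # Stamp a small bright square (the backdoor trigger) into the top-left
    # corner and replace the true label with the attacker-chosen target.
    poisoned = image.copy()
    poisoned[:patch, :patch] = value
    return poisoned, target_label

clean = np.zeros((28, 28))                  # a blank MNIST-sized image
bad_img, bad_label = poison_example(clean, label=1)
print(bad_img[:4, :4])                      # the 3x3 trigger patch is visible
print(bad_label)                            # label flipped from 1 to 7
```

The second sketch shows model poisoning via a scaled ("model replacement") update against plain FedAvg, together with a norm-clipping aggregator in the spirit of robust federated training. Client models are reduced to flat weight vectors for brevity, and the scaling rule and clip bound are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 10, 5

def fedavg(updates):
    # Server-side aggregation: a plain average of client weight vectors.
    # The server sees only the vectors, not how they were produced.
    return np.mean(updates, axis=0)

def local_train(global_w):
    # Stand-in for honest local SGD: a small perturbation of the global model.
    return global_w + 0.01 * rng.normal(size=global_w.shape)

def malicious_update(global_w, backdoor_w, n):
    # Model replacement: scale the poisoned direction so that, averaged with
    # n - 1 near-unchanged honest updates, the aggregate lands close to the
    # attacker's backdoored weights.
    return global_w + n * (backdoor_w - global_w)

def clip_and_aggregate(updates, global_w, clip=0.1):
    # Robust federated training sketch: bound each client delta's L2 norm
    # before averaging, which blunts heavily scaled malicious updates.
    deltas = [u - global_w for u in updates]
    clipped = [d * min(1.0, clip / (np.linalg.norm(d) + 1e-12)) for d in deltas]
    return global_w + np.mean(clipped, axis=0)

global_w = np.zeros(dim)
backdoor_w = np.full(dim, 0.5)  # weights that would implement trigger behavior

updates = [local_train(global_w) for _ in range(n_clients - 1)]
updates.append(malicious_update(global_w, backdoor_w, n_clients))

print(fedavg(updates))                        # ~0.5 everywhere: attacker wins
print(clip_and_aggregate(updates, global_w))  # ~0 everywhere: influence bounded
```

Clipping only bounds each client's influence; an anomalous-update-detection defense would instead try to flag and drop the outlier before aggregation, for example by comparing update norms or pairwise cosine similarities.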
Journal Introduction:
IEEE Wireless Communications is tailored for professionals within the communications and networking communities. It addresses technical and policy issues associated with personalized, location-independent communications across various media and protocol layers. Encompassing both wired and wireless communications, the magazine explores the intersection of computing, the mobility of individuals, communicating devices, and personalized services.
Every issue of this interdisciplinary publication presents high-quality articles delving into the revolutionary technological advances in personal, location-independent communications and computing. IEEE Wireless Communications provides an insightful platform for individuals engaged in these dynamic fields, offering in-depth coverage of significant developments in the realm of communication technology.