{"title":"基于成本高效洗牌和重组的联邦学习防御","authors":"Shun-Meng Huang, Yu-Wen Chen, Jian-Jhih Kuo","doi":"10.1109/GLOBECOM46510.2021.9685499","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) enables multiple user de-vices to collaboratively train a global machine learning (ML) model by uploading their local models to the central server for aggregation. However, attackers may upload tampered local models (e.g., label-flipping attack) to corrupt the global model. Existing defense methods focus on outlier detection, but they are computationally intensive and can be circumvented by advanced model tampering. We employ a shuffling-based defense model to isolate the attackers from ordinary users. To explore the intrinsic properties, we simplify the defense model problem and formulate it as a Markov Decision Problem (MDP) to find the optimal policy. Then, we introduce a novel notion, (re)grouping, into the defense model to propose a new cost-efficient defense framework termed SAGE. Experiment results manifest that SAGE can effectively mitigate the impact of attacks in FL by efficiently decreasing the ratio of attacker devices to ordinary user devices. SAGE increases the testing accuracy of the targeted class by at most 40%.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Cost-Efficient Shuffling and Regrouping Based Defense for Federated Learning\",\"authors\":\"Shun-Meng Huang, Yu-Wen Chen, Jian-Jhih Kuo\",\"doi\":\"10.1109/GLOBECOM46510.2021.9685499\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) enables multiple user de-vices to collaboratively train a global machine learning (ML) model by uploading their local models to the central server for aggregation. However, attackers may upload tampered local models (e.g., label-flipping attack) to corrupt the global model. Existing defense methods focus on outlier detection, but they are computationally intensive and can be circumvented by advanced model tampering. We employ a shuffling-based defense model to isolate the attackers from ordinary users. To explore the intrinsic properties, we simplify the defense model problem and formulate it as a Markov Decision Problem (MDP) to find the optimal policy. Then, we introduce a novel notion, (re)grouping, into the defense model to propose a new cost-efficient defense framework termed SAGE. Experiment results manifest that SAGE can effectively mitigate the impact of attacks in FL by efficiently decreasing the ratio of attacker devices to ordinary user devices. 
SAGE increases the testing accuracy of the targeted class by at most 40%.\",\"PeriodicalId\":200641,\"journal\":{\"name\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBECOM46510.2021.9685499\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Global Communications Conference (GLOBECOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM46510.2021.9685499","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cost-Efficient Shuffling and Regrouping Based Defense for Federated Learning
Federated learning (FL) enables multiple user devices to collaboratively train a global machine learning (ML) model by uploading their local models to a central server for aggregation. However, attackers may upload tampered local models (e.g., via a label-flipping attack) to corrupt the global model. Existing defense methods focus on outlier detection, but they are computationally intensive and can be circumvented by advanced model tampering. We employ a shuffling-based defense model to isolate attackers from ordinary users. To explore its intrinsic properties, we simplify the defense model problem and formulate it as a Markov Decision Process (MDP) to find the optimal policy. We then introduce a novel notion, (re)grouping, into the defense model and propose a new cost-efficient defense framework termed SAGE. Experimental results show that SAGE effectively mitigates the impact of attacks in FL by efficiently decreasing the ratio of attacker devices to ordinary user devices, increasing the testing accuracy of the targeted class by up to 40%.
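The shuffling idea in the abstract can be pictured with a short sketch: each round, devices are randomly repartitioned into aggregation groups, so malicious devices do not stay concentrated with the same peers. The Python snippet below is only a minimal illustration of that general shuffle-and-regroup intuition; the function name shuffle_into_groups, the group count, and the device list are hypothetical and do not reproduce the paper's SAGE framework or its MDP-derived policy.

```python
import random
from typing import Dict, List

def shuffle_into_groups(device_ids: List[int], num_groups: int,
                        rng: random.Random) -> Dict[int, List[int]]:
    """Randomly partition devices into num_groups groups of roughly equal size.

    Illustrative only: SAGE additionally decides when to (re)group based on
    the MDP policy, which is not modeled here.
    """
    ids = device_ids[:]          # copy so the caller's list is untouched
    rng.shuffle(ids)             # random shuffle drives the (re)grouping
    groups: Dict[int, List[int]] = {g: [] for g in range(num_groups)}
    for i, dev in enumerate(ids):
        groups[i % num_groups].append(dev)  # round-robin fill after shuffling
    return groups

if __name__ == "__main__":
    rng = random.Random(0)
    devices = list(range(20))    # 20 devices, some of which may be attackers
    for round_idx in range(3):   # re-shuffle at every aggregation round
        groups = shuffle_into_groups(devices, num_groups=4, rng=rng)
        print(f"round {round_idx}: {groups}")
```

Under this kind of repeated random regrouping, the per-group ratio of attacker devices to ordinary user devices varies from round to round, which is the quantity the abstract describes SAGE as driving down.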