{"title":"MLGuard:在保护隐私的分布式协作学习中减轻中毒攻击","authors":"Youssef Khazbak, Tianxiang Tan, G. Cao","doi":"10.1109/ICCCN49398.2020.9209670","DOIUrl":null,"url":null,"abstract":"Distributed collaborative learning has enabled building machine learning models from distributed mobile users’ data. It allows the server and users to collaboratively train a learning model where users only share model parameters with the server. To protect privacy, the server can use secure multiparty computation to learn the global model without revealing users’ parameter updates in the clear. However this privacy preserving distributed learning opens the door to poisoning attacks, where malicious users poison their training data to maliciously influence the behavior of the global model. In this paper, we propose MLGuard, a privacy preserving distributed collaborative learning system with poisoning attack mitigation. MLGuard employs lightweight secret sharing scheme and a novel poisoning attack mitigation technique. We address several challenges such as preserving users’ privacy, mitigating poisoning attacks, respecting resource constraints of mobile devices, and scaling to large number of users. Evaluation results demonstrate the effectiveness of MLGuard on building high accurate learning models with the existence of malicious users, while imposing minimal communication cost on mobile devices.","PeriodicalId":137835,"journal":{"name":"2020 29th International Conference on Computer Communications and Networks (ICCCN)","volume":"30 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"29","resultStr":"{\"title\":\"MLGuard: Mitigating Poisoning Attacks in Privacy Preserving Distributed Collaborative Learning\",\"authors\":\"Youssef Khazbak, Tianxiang Tan, G. Cao\",\"doi\":\"10.1109/ICCCN49398.2020.9209670\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Distributed collaborative learning has enabled building machine learning models from distributed mobile users’ data. It allows the server and users to collaboratively train a learning model where users only share model parameters with the server. To protect privacy, the server can use secure multiparty computation to learn the global model without revealing users’ parameter updates in the clear. However this privacy preserving distributed learning opens the door to poisoning attacks, where malicious users poison their training data to maliciously influence the behavior of the global model. In this paper, we propose MLGuard, a privacy preserving distributed collaborative learning system with poisoning attack mitigation. MLGuard employs lightweight secret sharing scheme and a novel poisoning attack mitigation technique. We address several challenges such as preserving users’ privacy, mitigating poisoning attacks, respecting resource constraints of mobile devices, and scaling to large number of users. 
Evaluation results demonstrate the effectiveness of MLGuard on building high accurate learning models with the existence of malicious users, while imposing minimal communication cost on mobile devices.\",\"PeriodicalId\":137835,\"journal\":{\"name\":\"2020 29th International Conference on Computer Communications and Networks (ICCCN)\",\"volume\":\"30 5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"29\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 29th International Conference on Computer Communications and Networks (ICCCN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCN49398.2020.9209670\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 29th International Conference on Computer Communications and Networks (ICCCN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCN49398.2020.9209670","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MLGuard: Mitigating Poisoning Attacks in Privacy Preserving Distributed Collaborative Learning
Distributed collaborative learning has enabled building machine learning models from distributed mobile users’ data. It allows the server and users to collaboratively train a learning model in which users share only model parameters with the server. To protect privacy, the server can use secure multiparty computation to learn the global model without seeing users’ parameter updates in the clear. However, this privacy-preserving distributed learning opens the door to poisoning attacks, in which malicious users poison their training data to influence the behavior of the global model. In this paper, we propose MLGuard, a privacy-preserving distributed collaborative learning system with poisoning attack mitigation. MLGuard employs a lightweight secret sharing scheme and a novel poisoning attack mitigation technique. We address several challenges, such as preserving users’ privacy, mitigating poisoning attacks, respecting the resource constraints of mobile devices, and scaling to a large number of users. Evaluation results demonstrate the effectiveness of MLGuard in building highly accurate learning models in the presence of malicious users, while imposing minimal communication cost on mobile devices.
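The abstract does not spell out MLGuard’s secret sharing construction, but the sketch below illustrates the general principle behind lightweight secure aggregation via additive secret sharing: each user splits its update into random shares, and no single server ever sees an individual update in the clear. The two-server setting, ring size, fixed-point encoding, and function names are assumptions made for this illustration, not the paper’s actual protocol, and the paper’s novel poisoning-mitigation technique is not reproduced here.

```python
# Minimal sketch of secure aggregation with additive secret sharing.
# Assumptions (not from the paper): two non-colluding servers, a 2**32
# ring, and updates already encoded as fixed-point unsigned integers.
import numpy as np

MOD = 2**32   # size of the ring the shares live in (assumed)
DIM = 4       # toy model dimension

def make_shares(update, rng):
    """Split an integer-encoded update u into additive shares
    s1 + s2 = u (mod MOD). Each share alone is uniformly random,
    so a single server learns nothing about u."""
    s1 = rng.integers(0, MOD, size=update.shape, dtype=np.uint64)
    s2 = (update - s1) % MOD   # uint64 wraparound is compatible with % MOD
    return s1, s2

rng = np.random.default_rng(0)
# Toy per-user updates from three users.
updates = [rng.integers(0, 100, size=DIM, dtype=np.uint64) for _ in range(3)]

# Each user sends one share to each server; servers sum shares locally.
server_a = np.zeros(DIM, dtype=np.uint64)
server_b = np.zeros(DIM, dtype=np.uint64)
for u in updates:
    s1, s2 = make_shares(u, rng)
    server_a = (server_a + s1) % MOD
    server_b = (server_b + s2) % MOD

# Combining only the *aggregated* shares reveals the sum of all updates,
# never any individual user's update.
aggregate = (server_a + server_b) % MOD
assert np.array_equal(aggregate, sum(updates) % MOD)
print(aggregate)
```

This kind of scheme involves only modular additions and one extra message per server, which is consistent with the minimal communication and computation cost on mobile devices that the abstract emphasizes.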