MLGuard: Mitigating Poisoning Attacks in Privacy Preserving Distributed Collaborative Learning

Youssef Khazbak, Tianxiang Tan, G. Cao
DOI: 10.1109/ICCCN49398.2020.9209670
Published in: 2020 29th International Conference on Computer Communications and Networks (ICCCN)
Publication date: 2020-08-01
Citations: 29

Abstract

Distributed collaborative learning has enabled building machine learning models from distributed mobile users' data. It allows the server and users to collaboratively train a learning model in which users share only model parameters with the server. To protect privacy, the server can use secure multiparty computation to learn the global model without seeing users' parameter updates in the clear. However, this privacy-preserving distributed learning opens the door to poisoning attacks, in which malicious users poison their training data to influence the behavior of the global model. In this paper, we propose MLGuard, a privacy-preserving distributed collaborative learning system with poisoning attack mitigation. MLGuard employs a lightweight secret sharing scheme and a novel poisoning attack mitigation technique. We address several challenges, such as preserving users' privacy, mitigating poisoning attacks, respecting the resource constraints of mobile devices, and scaling to a large number of users. Evaluation results demonstrate the effectiveness of MLGuard in building highly accurate learning models in the presence of malicious users, while imposing minimal communication cost on mobile devices.
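The abstract does not specify MLGuard's secret sharing construction. As a rough illustration of the general idea behind secret-shared aggregation, the sketch below implements textbook additive secret sharing over a prime field: each user splits its (integer-quantized) parameter update into random shares summing to the update, so any single share reveals nothing, yet share-wise sums across users reconstruct the aggregate update. All function names here are hypothetical; this is a minimal sketch of the standard technique, not the paper's actual protocol.

```python
import random

PRIME = 2**61 - 1  # prime modulus; shares are uniform in this field


def share_update(update, n_holders):
    """Split an integer-quantized update vector into n_holders additive shares.

    Each of the first n_holders-1 shares is uniformly random; the last share
    is chosen so that all shares sum to the true update (mod PRIME).
    """
    shares = [[random.randrange(PRIME) for _ in update]
              for _ in range(n_holders - 1)]
    last = [(u - sum(col)) % PRIME
            for u, col in zip(update, zip(*shares))]
    shares.append(last)
    return shares


def reconstruct(shares):
    """Sum additive shares coordinate-wise to recover the underlying vector."""
    return [sum(col) % PRIME for col in zip(*shares)]
```

Because the sharing is additive, each share holder can sum the shares it received from all users locally; reconstructing from those per-holder sums yields the sum of all users' updates without any party ever seeing an individual update in the clear.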