ROFL: RObust privacy preserving Federated Learning

Nandish Chattopadhyay, Arpita Singh, A. Chattopadhyay
{"title":"ROFL:鲁棒隐私保护联邦学习","authors":"Nandish Chattopadhyay, Arpita Singh, A. Chattopadhyay","doi":"10.1109/ICDCSW56584.2022.00033","DOIUrl":null,"url":null,"abstract":"In the modern world of connectivity, most data is generated in a de centralised way, across a multitude of platforms like mobile devices and other loT applications. This crowd sourced data, if well analyzed, can prove to be rich in insights, for different tasks. However, the issue in utilizing it lies with the consolidation of the data, which is unacceptable to most involved parties. While every participant stands to benefit from the collective use of the massive data repositories, the lack of trust between them prevents that endeavour. In this paper, we propose ROFL, which is an end-to-end robust mechanism of learning, that has been developed keeping all the trust issues in mind and addressing the necessity of privacy. We make note of the threat models that might make the participants apprehensive and design a bi-directional two-dimensional privacy preserving framework, that builds upon the state-of-the-art in differentially private federated learning. Specifically, we propose a weighted federated averaging technique for aggregation of the differentially private models generated by the participants. We are able to provide privacy guarantees without compromising on the accuracy of the machine learning task. ROFL has been tested for multiple neural architectures (VGG-16 [1] and ResNet [2]) on multiple datasets (MNIST [3], CIFAR-I0 and CIFAR-I00 [4]). On the machine learning tasks, it is able to achieve accuracies within the range of 1 % -2 % of what a model trained on the collected data would have generated, in the average case scenario. We have verified the robustness of ROFL against attacks involving sabotaging or malicious client providing erroneous models. The study on model convergence reveals how to improve the efficiency of ROFL. We also provide evidence on how ROFL is easily scalable in nature.","PeriodicalId":357138,"journal":{"name":"2022 IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ROFL: RObust privacy preserving Federated Learning\",\"authors\":\"Nandish Chattopadhyay, Arpita Singh, A. Chattopadhyay\",\"doi\":\"10.1109/ICDCSW56584.2022.00033\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the modern world of connectivity, most data is generated in a de centralised way, across a multitude of platforms like mobile devices and other loT applications. This crowd sourced data, if well analyzed, can prove to be rich in insights, for different tasks. However, the issue in utilizing it lies with the consolidation of the data, which is unacceptable to most involved parties. While every participant stands to benefit from the collective use of the massive data repositories, the lack of trust between them prevents that endeavour. In this paper, we propose ROFL, which is an end-to-end robust mechanism of learning, that has been developed keeping all the trust issues in mind and addressing the necessity of privacy. We make note of the threat models that might make the participants apprehensive and design a bi-directional two-dimensional privacy preserving framework, that builds upon the state-of-the-art in differentially private federated learning. 
Specifically, we propose a weighted federated averaging technique for aggregation of the differentially private models generated by the participants. We are able to provide privacy guarantees without compromising on the accuracy of the machine learning task. ROFL has been tested for multiple neural architectures (VGG-16 [1] and ResNet [2]) on multiple datasets (MNIST [3], CIFAR-I0 and CIFAR-I00 [4]). On the machine learning tasks, it is able to achieve accuracies within the range of 1 % -2 % of what a model trained on the collected data would have generated, in the average case scenario. We have verified the robustness of ROFL against attacks involving sabotaging or malicious client providing erroneous models. The study on model convergence reveals how to improve the efficiency of ROFL. We also provide evidence on how ROFL is easily scalable in nature.\",\"PeriodicalId\":357138,\"journal\":{\"name\":\"2022 IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW)\",\"volume\":\"138 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDCSW56584.2022.00033\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCSW56584.2022.00033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In the modern connected world, most data is generated in a decentralised way, across a multitude of platforms such as mobile devices and other IoT applications. This crowd-sourced data, if well analysed, can prove to be rich in insights for different tasks. However, utilising it requires consolidating the data, which is unacceptable to most of the parties involved. While every participant stands to benefit from the collective use of the massive data repositories, the lack of trust between them prevents that endeavour. In this paper, we propose ROFL, an end-to-end robust learning mechanism developed with these trust issues in mind and addressing the necessity of privacy. We examine the threat models that might make participants apprehensive and design a bi-directional, two-dimensional privacy-preserving framework that builds upon the state of the art in differentially private federated learning. Specifically, we propose a weighted federated averaging technique for aggregating the differentially private models generated by the participants. We are able to provide privacy guarantees without compromising the accuracy of the machine learning task. ROFL has been tested with multiple neural architectures (VGG-16 [1] and ResNet [2]) on multiple datasets (MNIST [3], CIFAR-10 and CIFAR-100 [4]). On these machine learning tasks, in the average case it achieves accuracies within 1%-2% of what a model trained on the centrally collected data would have achieved. We have verified the robustness of ROFL against attacks in which sabotaging or malicious clients provide erroneous models. A study of model convergence reveals how to improve the efficiency of ROFL, and we also provide evidence that ROFL is easily scalable.
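The abstract names two steps but does not give their exact formulation: client-side generation of differentially private models and server-side weighted federated averaging. The sketch below is a minimal illustration of that shape, assuming Gaussian-mechanism noise after norm clipping and weighting by local sample count; `privatize`, `weighted_fed_avg`, and all parameter values are hypothetical names chosen for this example, not the paper's actual API or calibration.

```python
# Illustrative sketch of one ROFL-style round (assumptions, not the paper's
# exact method): each client perturbs its locally trained model for
# differential privacy, then the server takes a weighted federated average.
import numpy as np

def privatize(weights, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip the model weights to a bounded L2 norm, then add Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    flat = np.concatenate([w.ravel() for w in weights])
    scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))
    sigma = noise_multiplier * clip_norm  # assumed Gaussian-mechanism scale
    return [w * scale + rng.normal(0.0, sigma, size=w.shape) for w in weights]

def weighted_fed_avg(client_models, client_sizes):
    """Average client models layer by layer, weighted by local sample count."""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    n_layers = len(client_models[0])
    return [sum(c * m[i] for c, m in zip(coeffs, client_models))
            for i in range(n_layers)]

rng = np.random.default_rng(0)
# Three clients, each holding a toy two-layer model and a local dataset size.
clients = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(3)]
noisy = [privatize(m, rng=rng) for m in clients]
global_model = weighted_fed_avg(noisy, client_sizes=[100, 300, 600])
print([layer.shape for layer in global_model])  # [(4, 4), (4,)]
```

Weighting by sample count is the convention of standard FedAvg; whatever weighting ROFL actually uses would slot into `coeffs` in the same way.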
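The abstract also reports robustness against sabotaging or malicious clients that submit erroneous models, without describing the defence. One generic screen, shown purely to make the idea concrete, is to discard updates that lie unusually far from the coordinate-wise median update before averaging; `screen_updates` and its tolerance are assumptions for illustration, not the paper's mechanism.

```python
# Hypothetical outlier screen for erroneous client models: keep a client if
# its flattened update is not far from the coordinate-wise median update.
import numpy as np

def screen_updates(updates, tolerance=3.0):
    """Return indices of clients whose update is not an outlier.

    An update is kept if its distance to the coordinate-wise median update
    is within `tolerance` times the median of all such distances.
    """
    stacked = np.stack(updates)                       # (n_clients, n_params)
    center = np.median(stacked, axis=0)               # robust reference point
    dists = np.linalg.norm(stacked - center, axis=1)  # one distance per client
    cutoff = tolerance * np.median(dists)
    return [i for i, d in enumerate(dists) if d <= cutoff]

rng = np.random.default_rng(1)
benign = [rng.normal(0.0, 0.1, size=64) for _ in range(9)]
malicious = [rng.normal(5.0, 0.1, size=64)]           # grossly erroneous model
kept = screen_updates(benign + malicious)
print(kept)  # indices 0-8: the malicious client (index 9) is screened out
```

In a full round, such a screen would run before the weighted average above, so that a single sabotaging client cannot drag the aggregate arbitrarily far.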