Sparse Communication for Federated Learning

Kundjanasith Thonglek, Keichi Takahashi, Kohei Ichikawa, Chawanat Nakasan, P. Leelaprute, Hajimu Iida
{"title":"Sparse Communication for Federated Learning","authors":"Kundjanasith Thonglek, Keichi Takahashi, Koheix Ichikawa, Chawanat Nakasan, P. Leelaprute, Hajimu Iida","doi":"10.1109/icfec54809.2022.00008","DOIUrl":null,"url":null,"abstract":"Federated learning trains a model on a centralized server using datasets distributed over a massive amount of edge devices. Since federated learning does not send local data from edge devices to the server, it preserves data privacy. It transfers the local models from edge devices instead of the local data. However, communication costs are frequently a problem in federated learning. This paper proposes a novel method to reduce the required communication cost for federated learning by transferring only top updated parameters in neural network models. The proposed method allows adjusting the criteria of updated parameters to trade-off the reduction of communication costs and the loss of model accuracy. We evaluated the proposed method using diverse models and datasets and found that it can achieve comparable performance to transfer original models for federated learning. As a result, the proposed method has achieved a reduction of the required communication costs around 90% when compared to the conventional method for VGG16. Furthermore, we found out that the proposed method is able to reduce the communication cost of a large model more than of a small model due to the different threshold of updated parameters in each model architecture.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icfec54809.2022.00008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Federated learning trains a model on a centralized server using datasets distributed over a large number of edge devices. Since federated learning does not send local data from edge devices to the server, it preserves data privacy; the local models, rather than the local data, are transferred from the edge devices. However, communication cost is frequently a problem in federated learning. This paper proposes a novel method to reduce the communication cost of federated learning by transferring only the most significantly updated parameters of the neural network model. The proposed method allows adjusting the selection criterion for updated parameters to trade off the reduction in communication cost against the loss of model accuracy. We evaluated the proposed method using diverse models and datasets and found that it achieves performance comparable to transferring the original models. For VGG16, the proposed method reduces the required communication cost by around 90% compared to the conventional method. Furthermore, we found that the proposed method reduces the communication cost of a large model more than that of a small model, because the threshold on updated parameters differs across model architectures.
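
The paper itself does not include code; the following is a minimal Python/NumPy sketch of the idea the abstract describes, in which each client transmits only the top fraction of its parameters ranked by update magnitude, and the server averages the sparse updates it receives. The names sparsify_update, apply_sparse_updates, and keep_ratio are illustrative assumptions, not identifiers from the paper, and the flattened parameter vector is a simplification of a real multi-layer model.

    # Sketch of top-update sparsification for federated learning.
    # Assumes model parameters are flattened into a single NumPy vector;
    # all function and parameter names here are hypothetical.
    import numpy as np

    def sparsify_update(global_params: np.ndarray,
                        local_params: np.ndarray,
                        keep_ratio: float = 0.1):
        """Return indices and values of the most-changed parameters.

        Only the top `keep_ratio` fraction of parameters, ranked by the
        magnitude of their update, is transferred to the server.
        """
        delta = local_params - global_params
        k = max(1, int(keep_ratio * delta.size))
        # Indices of the k largest-magnitude updates (order not guaranteed).
        top_idx = np.argpartition(np.abs(delta), -k)[-k:]
        return top_idx, delta[top_idx]

    def apply_sparse_updates(global_params: np.ndarray, client_updates):
        """Average the sparse client updates into the global model.

        `client_updates` is an iterable of (indices, values) pairs,
        one per client; each parameter is averaged over the clients
        that actually reported it.
        """
        accum = np.zeros_like(global_params)
        counts = np.zeros_like(global_params)
        for idx, vals in client_updates:
            accum[idx] += vals
            counts[idx] += 1
        mask = counts > 0
        global_params[mask] += accum[mask] / counts[mask]
        return global_params

In this sketch, lowering keep_ratio shrinks the per-round payload at the risk of greater accuracy loss, while raising it approaches the behavior (and cost) of transferring the full model, mirroring the trade-off the abstract describes.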