Fast-Convergent Wireless Federated Learning: A Voting-Based TopK Model Compression Approach

Xiaoxin Su;Yipeng Zhou;Laizhong Cui;Quan Z. Sheng;Yinggui Wang;Song Guo
{"title":"快速收敛的无线联合学习:基于投票的 TopK 模型压缩方法","authors":"Xiaoxin Su;Yipeng Zhou;Laizhong Cui;Quan Z. Sheng;Yinggui Wang;Song Guo","doi":"10.1109/JSAC.2024.3431568","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has been extensively exploited in the training of machine learning models to preserve data privacy. In particular, wireless FL enables multiple clients to collaboratively train models by sharing model updates via wireless communication without exposing raw data. The state-of-the-art wireless FL advocates efficient aggregation of model updates from multiple clients by over-the-air computing. However, a significant deficiency of over-the-air aggregation lies in the infeasibility of TopK model compression given that top model updates cannot be aggregated directly before they are aligned according to their indices. In view of the fact that TopK can greatly accelerate FL, we design a novel wireless FL with voting based TopK algorithm, namely WFL-VTopK, so that top model updates can be aggregated by over-the-air computing directly. Specifically, there are two phases in WFL-VTopK. In Phase 1, clients vote their top model updates, based on which global top model updates can be efficiently identified. In Phase 2, clients formally upload global top model updates so that they can be directly aggregated by over-the-air computing. Furthermore, the convergence of WFL-VTopK is theoretically guaranteed under non-convex loss. Based on the convergence of WFL-VTopK, we optimize model utility subjecting to training time and energy constraints. To validate the superiority of WFL-VTopK, we extensively conduct experiments with real datasets under wireless communication. The experimental results demonstrate that WFL-VTopK can effectively aggregate models by only communicating 1%-2% top models updates, and hence significantly outperforms the state-of-the-art baselines. By significantly reducing the wireless communication traffic, our work paves the road to train large models in wireless FL.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"3048-3063"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast-Convergent Wireless Federated Learning: A Voting-Based TopK Model Compression Approach\",\"authors\":\"Xiaoxin Su;Yipeng Zhou;Laizhong Cui;Quan Z. Sheng;Yinggui Wang;Song Guo\",\"doi\":\"10.1109/JSAC.2024.3431568\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) has been extensively exploited in the training of machine learning models to preserve data privacy. In particular, wireless FL enables multiple clients to collaboratively train models by sharing model updates via wireless communication without exposing raw data. The state-of-the-art wireless FL advocates efficient aggregation of model updates from multiple clients by over-the-air computing. However, a significant deficiency of over-the-air aggregation lies in the infeasibility of TopK model compression given that top model updates cannot be aggregated directly before they are aligned according to their indices. In view of the fact that TopK can greatly accelerate FL, we design a novel wireless FL with voting based TopK algorithm, namely WFL-VTopK, so that top model updates can be aggregated by over-the-air computing directly. 
Specifically, there are two phases in WFL-VTopK. In Phase 1, clients vote their top model updates, based on which global top model updates can be efficiently identified. In Phase 2, clients formally upload global top model updates so that they can be directly aggregated by over-the-air computing. Furthermore, the convergence of WFL-VTopK is theoretically guaranteed under non-convex loss. Based on the convergence of WFL-VTopK, we optimize model utility subjecting to training time and energy constraints. To validate the superiority of WFL-VTopK, we extensively conduct experiments with real datasets under wireless communication. The experimental results demonstrate that WFL-VTopK can effectively aggregate models by only communicating 1%-2% top models updates, and hence significantly outperforms the state-of-the-art baselines. By significantly reducing the wireless communication traffic, our work paves the road to train large models in wireless FL.\",\"PeriodicalId\":73294,\"journal\":{\"name\":\"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society\",\"volume\":\"42 11\",\"pages\":\"3048-3063\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10605794/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10605794/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) has been extensively exploited in the training of machine learning models to preserve data privacy. In particular, wireless FL enables multiple clients to collaboratively train models by sharing model updates via wireless communication without exposing raw data. State-of-the-art wireless FL advocates efficient aggregation of model updates from multiple clients by over-the-air computing. However, a significant deficiency of over-the-air aggregation lies in the infeasibility of TopK model compression, given that top model updates cannot be aggregated directly before they are aligned according to their indices. In view of the fact that TopK can greatly accelerate FL, we design a novel wireless FL algorithm with voting-based TopK, namely WFL-VTopK, so that top model updates can be aggregated directly by over-the-air computing. Specifically, WFL-VTopK has two phases. In Phase 1, clients vote for their top model updates, based on which the global top model updates can be efficiently identified. In Phase 2, clients formally upload the global top model updates so that they can be directly aggregated by over-the-air computing. Furthermore, the convergence of WFL-VTopK is theoretically guaranteed under non-convex loss. Based on this convergence analysis, we optimize model utility subject to training time and energy constraints. To validate the superiority of WFL-VTopK, we conduct extensive experiments with real datasets under wireless communication. The experimental results demonstrate that WFL-VTopK can effectively aggregate models by communicating only 1%-2% of the top model updates, and hence significantly outperforms state-of-the-art baselines. By significantly reducing wireless communication traffic, our work paves the way to training large models in wireless FL.
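To make the two-phase protocol concrete, below is a minimal NumPy sketch of one training round. This is an assumed simulation, not the authors' implementation: the magnitude-based local TopK rule, the vote-tally selection, and the final averaging step are illustrative choices, and a simple in-memory sum stands in for the over-the-air channel.

```python
import numpy as np

def local_topk_indices(update, k):
    # Indices of the k largest-magnitude coordinates of a client's update.
    return np.argpartition(np.abs(update), -k)[-k:]

def wfl_vtopk_round(client_updates, k):
    """One simulated WFL-VTopK round (illustrative sketch).

    Phase 1: each client votes for its local top-k coordinates; the server
    tallies votes to identify a shared global top-k index set.
    Phase 2: every client transmits its values at the agreed indices, so the
    aligned updates can be summed directly, as over-the-air computing would.
    """
    d = client_updates[0].size
    votes = np.zeros(d, dtype=int)
    for u in client_updates:
        votes[local_topk_indices(u, k)] += 1        # Phase 1: voting
    global_idx = np.argpartition(votes, -k)[-k:]    # most-voted coordinates

    aggregated = np.zeros(d)
    for u in client_updates:
        aggregated[global_idx] += u[global_idx]     # Phase 2: aligned sum
    aggregated[global_idx] /= len(client_updates)   # average of aligned updates
    return aggregated, global_idx

# Toy usage: 3 clients, 1000-dim updates, communicating ~2% of coordinates.
rng = np.random.default_rng(0)
updates = [rng.standard_normal(1000) for _ in range(3)]
agg, idx = wfl_vtopk_round(updates, k=20)
print(f"aggregated {len(idx)} of {updates[0].size} coordinates")
```

Because Phase 1 fixes a common index set before any values are transmitted, every client's sparse update is aligned coordinate-by-coordinate, which is exactly the property over-the-air aggregation needs and plain per-client TopK lacks.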