Efficient Federated Learning in Wireless Networks With Incremental Model Quantization and Uploading

Impact Factor: 6.7 | CAS Zone 2 (Computer Science) | JCR Q1 (Engineering, Multidisciplinary)
Zheng Qin;Gang Feng;YiJing Liu;Takshing P. Yum;Fei Wang;Jun Wang
DOI: 10.1109/TNSE.2025.3546333
Journal: IEEE Transactions on Network Science and Engineering, vol. 12, no. 3, pp. 2217-2230
Published: 2025-02-26 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10906453/
Citations: 0

Abstract

Federated Learning (FL) has been widely recognized as a promising enabler for future intelligent wireless networks, as it collaboratively trains a global machine learning (ML) model in a privacy-preserving manner. However, the transmission of large-scale models between clients and servers is constrained by limited communication resources. Recently proposed model quantization methods can reduce communication costs by shrinking the amount of model data to be transmitted, but they need to be adapted for wireless networks with rapidly changing radio channels. In this paper, we propose a federated learning scheme with an incremental model quantization and uploading mechanism, called Fed_IQ. Specifically, individual clients quantize their local model parameters to derive base and incremental model parameters. The base model is uploaded first, while the incremental model is uploaded only when the wireless link is sufficiently good. The quantization levels are also adapted to the instantaneous channel states. The server then uses only the base model, or combines the base and incremental models, to aggregate a more accurate global model. Experimental results show that our proposed Fed_IQ can significantly reduce transmission delay and improve model accuracy in a wireless network compared with a number of known state-of-the-art algorithms.
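The base/incremental split described in the abstract can be illustrated with a minimal sketch: a client quantizes its weights coarsely to form the always-uploaded base model, then quantizes the residual more finely to form the incremental model, which the server adds back when it arrives. This is an assumption-laden illustration of the general idea, not the authors' Fed_IQ implementation; the bit widths, the uniform quantizer, and the function names here are all hypothetical.

```python
import numpy as np

def quantize(x, n_bits):
    """Uniform quantization of x onto 2**n_bits levels over its own range."""
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((x - lo) / scale) * scale + lo

def split_base_incremental(weights, base_bits=4, inc_bits=8):
    """Client side: coarse 'base' parameters plus a finer quantized residual.

    Bit widths are illustrative; in Fed_IQ the quantization levels would be
    chosen from the instantaneous channel state.
    """
    base = quantize(weights, base_bits)               # always uploaded
    incremental = quantize(weights - base, inc_bits)  # uploaded only on a good link
    return base, incremental

def server_reconstruct(base, incremental=None):
    """Server side: use the base alone, or refine it with the increment."""
    return base if incremental is None else base + incremental

# Toy check: refining with the increment reduces reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
base, inc = split_base_incremental(w)
coarse = server_reconstruct(base)
fine = server_reconstruct(base, inc)
assert np.abs(w - fine).mean() < np.abs(w - coarse).mean()
```

The design intuition is that the residual `w - base` has a much smaller dynamic range than `w` itself, so a few extra bits on the residual buy a large accuracy gain whenever the channel permits the second upload.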
Source Journal

IEEE Transactions on Network Science and Engineering (Engineering: Control and Systems Engineering)
CiteScore: 12.60
Self-citation rate: 9.10%
Articles published per year: 393
Journal description: The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles on the theory and applications of network science and the interconnections among the elements in a system that form a network. In particular, TNSE publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims at discovering common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between, and co-evolution of, different genres of networks.