Efficient Federated Learning in Wireless Networks With Incremental Model Quantization and Uploading

Zheng Qin; Gang Feng; YiJing Liu; Takshing P. Yum; Fei Wang; Jun Wang

IEEE Transactions on Network Science and Engineering, vol. 12, no. 3, pp. 2217-2230
DOI: 10.1109/TNSE.2025.3546333 · Published: 2025-02-26
https://ieeexplore.ieee.org/document/10906453/
Federated Learning (FL) is widely recognized as a promising enabler for future intelligent wireless networks, as it collaboratively trains a global machine learning (ML) model in a privacy-preserving manner. However, transmitting large-scale models between clients and servers is constrained by limited communication resources. Recently proposed model quantization methods can reduce communication costs by compressing the model data to be transmitted, but they need to be adapted for wireless networks with rapidly changing radio channels. In this paper, we propose a federated learning scheme with an incremental model quantization and uploading mechanism, called Fed_IQ. Specifically, each client quantizes its local model parameters to derive base and incremental model parameters. The base model is always uploaded, while the incremental model is uploaded only when the wireless link is sufficiently good. The quantization levels are also adapted to the instantaneous channel states. The server then aggregates a more accurate global model using either the base models alone or the base and incremental models combined. Experimental results show that our proposed Fed_IQ significantly reduces transmission delay and improves model accuracy in a wireless network compared with a number of state-of-the-art algorithms.
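The abstract describes the base/increment split only at a high level. As a rough illustration of how such a scheme might work, the Python sketch below quantizes the local weights coarsely to form the base model and quantizes the residual error to form the increment, so that base + increment approximates a finer quantization. All function names, bit widths, and the SNR-threshold upload policy here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def uniform_quantize(x, num_bits):
    """Uniformly quantize a tensor onto 2**num_bits levels over its own range."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** num_bits - 1) if hi > lo else 1.0
    return np.round((x - lo) / scale) * scale + lo

def split_base_and_increment(weights, base_bits, incr_bits):
    """Split local weights into a coarse base model and an incremental
    model that quantizes the residual error left by the base."""
    base = uniform_quantize(weights, base_bits)
    increment = uniform_quantize(weights - base, incr_bits)
    return base, increment

# --- Client side (one round) ---
rng = np.random.default_rng(0)
local_weights = rng.normal(size=1000)   # stand-in for locally trained parameters

channel_snr_db = 18.0                   # assumed instantaneous channel measurement
SNR_THRESHOLD_DB = 15.0                 # hypothetical link-quality threshold

# Adapt the increment's quantization level to the instantaneous channel:
# finer quantization when the link is strong (an assumed policy).
incr_bits = 6 if channel_snr_db >= SNR_THRESHOLD_DB else 2

base, increment = split_base_and_increment(local_weights, base_bits=4, incr_bits=incr_bits)

# Always upload the base; add the increment only when the link is good.
upload = base + increment if channel_snr_db >= SNR_THRESHOLD_DB else base

# --- Server side ---
# Average whatever each client managed to upload (base only, or
# base + increment) into the new global model.
client_updates = [upload]               # one client shown for brevity
global_model = np.mean(client_updates, axis=0)
```

Because both the base-only upload and the base-plus-increment upload approximate the same local weights, the server can average them directly; clients on good links simply contribute a higher-fidelity approximation.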
Journal Introduction:
The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles on the theory and applications of network science and on the interconnections among the elements of a system that form a network. In particular, TNSE publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between, and co-evolution of, different genres of networks.