Adaptive Quantization based on Ensemble Distillation to Support FL enabled Edge Intelligence
Yijing Liu, Shuang Qin, Gang Feng, D. Niyato, Yao Sun, Jianhong Zhou
GLOBECOM 2022 - 2022 IEEE Global Communications Conference, 4 December 2022. DOI: 10.1109/GLOBECOM48099.2022.10001182
Federated learning (FL) has recently become one of the most widely recognized technologies for promoting the development of intelligent edge networks, driven by the ever-increasing computing capability of user equipment (UE). In the traditional FL paradigm, local models are usually required to be homogeneous so that they can be aggregated into an accurate global model. Moreover, considerable communication cost and training time may be incurred in resource-constrained edge networks due to the large number of UEs participating in model transmission and the large size of the transmitted models. Therefore, it is imperative to develop effective training schemes for heterogeneous FL models while reducing communication cost and training time. In this paper, we propose an adaptive quantization scheme based on ensemble distillation (AQeD) for FL to facilitate personalized quantized model training over heterogeneous local models with different sizes, structures, and quantization levels. Specifically, we design an augmented loss function by jointly considering the distillation loss, quantization values, and available wireless resources, where UEs train their local personalized machine learning models and send the quantized models to a server. Based on the local quantized models, the server first performs global aggregation for cluster ensembles and then sends the aggregated cluster model back to the participating UEs. Numerical results show that our proposed AQeD scheme can significantly reduce communication cost as well as training time in comparison with known state-of-the-art solutions.
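To make the client-side objective concrete, the sketch below shows one way such an augmented loss could be assembled: a supervised task loss, a distillation term against the cluster-ensemble predictions, and a penalty tied to the chosen quantization level. This is a minimal illustration under stated assumptions, not the authors' exact AQeD formulation: the uniform quantizer, the weighting factors, the temperature, and all function names (uniform_quantize, augmented_loss) are illustrative, and the radio-resource term mentioned in the abstract is omitted.

```python
# Illustrative sketch only: an augmented client objective combining task loss,
# ensemble-distillation loss, and a quantization-aware penalty. The quantizer,
# weights, and names are assumptions for this example, not the AQeD paper's
# exact formulation.
import torch
import torch.nn.functional as F


def uniform_quantize(tensor: torch.Tensor, num_bits: int) -> torch.Tensor:
    """Uniformly quantize a tensor to 2**num_bits levels (assumed scheme)."""
    levels = 2 ** num_bits - 1
    t_min, t_max = tensor.min(), tensor.max()
    if levels == 0 or t_max == t_min:
        return tensor.clone()
    scale = (t_max - t_min) / levels
    return torch.round((tensor - t_min) / scale) * scale + t_min


def augmented_loss(logits, labels, ensemble_logits, model, num_bits,
                   lambda_distill=0.5, lambda_quant=0.01, temperature=2.0):
    """Task loss + distillation against the cluster ensemble + quantization penalty."""
    # Standard supervised task loss on the UE's local data.
    task = F.cross_entropy(logits, labels)

    # Distillation term: align softened local predictions with the
    # cluster-ensemble predictions sent back by the server.
    distill = F.kl_div(
        F.log_softmax(logits / temperature, dim=-1),
        F.softmax(ensemble_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Penalty on the gap between full-precision and quantized weights, so the
    # local model stays close to its quantized counterpart at the chosen level.
    quant = sum(
        F.mse_loss(p, uniform_quantize(p.detach(), num_bits))
        for p in model.parameters()
    )

    return task + lambda_distill * distill + lambda_quant * quant
```

In a full training round, the UE would minimize this objective locally, quantize the resulting model at its own bit width, and upload it; the server would then aggregate the quantized models within each cluster and broadcast the aggregated cluster model back, as described in the abstract.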