{"title":"Model Pruning for Wireless Federated Learning with Heterogeneous Channels and Devices","authors":"Da-Wei Wang, Chi-Kai Hsieh, Kun-Lin Chan, Feng-Tsun Chien","doi":"10.1109/APWCS60142.2023.10234035","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) enables distributed model training, ensuring user privacy and reducing communication overheads. Model pruning further improves learning efficiency by removing weight connections in neural networks, increasing inference speed and reducing model storage size. While a larger pruning ratio shortens latency in each communication round, a larger number of communication rounds is needed for convergence. In this work, a training-based pruning ratio decision policy is proposed for wireless federated learning. By jointly minimizing average gradients and training latency with a given specific time budget, we optimize the pruning ratio for each device and the total number of training rounds. Numerical results demonstrate that the proposed algorithm achieves a faster convergence rate and lower latency compared to the existing approach.","PeriodicalId":375211,"journal":{"name":"2023 VTS Asia Pacific Wireless Communications Symposium (APWCS)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 VTS Asia Pacific Wireless Communications Symposium (APWCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APWCS60142.2023.10234035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Federated learning (FL) enables distributed model training while preserving user privacy and reducing communication overhead. Model pruning further improves learning efficiency by removing weight connections in neural networks, which increases inference speed and reduces model storage size. While a larger pruning ratio shortens the latency of each communication round, it also increases the number of communication rounds needed for convergence. In this work, a training-based pruning ratio decision policy is proposed for wireless federated learning. By jointly minimizing the average gradients and the training latency under a given time budget, we optimize the pruning ratio for each device and the total number of training rounds. Numerical results demonstrate that the proposed algorithm achieves a faster convergence rate and lower latency than existing approaches.
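
The abstract describes a tradeoff between per-round latency and the number of rounds needed for convergence. The sketch below is a minimal, hypothetical illustration of that tradeoff, not the authors' algorithm: it assumes a simple per-round latency model in which pruning shrinks both local computation and upload time proportionally, and a generic convergence proxy that decreases with the number of rounds and grows with the pruning ratio. For simplicity it searches a single shared pruning ratio, whereas the paper optimizes a ratio per device; all constants and functional forms are assumptions.

```python
import numpy as np


def round_latency(pruning_ratio, comp_time, comm_time):
    """Per-round latency of one device (assumed model): pruning reduces both
    local computation and model upload in proportion to the kept weights."""
    kept = 1.0 - pruning_ratio
    return comp_time * kept + comm_time * kept


def convergence_proxy(rounds, pruning_ratio, c1=1.0, c2=0.5):
    """Hypothetical stand-in for an average-gradient bound: it shrinks with
    more rounds (c1 / rounds) and grows with pruning-induced error (c2 * p^2)."""
    return c1 / rounds + c2 * pruning_ratio ** 2


def choose_pruning_ratio(comp_times, comm_times, time_budget):
    """Grid-search a shared pruning ratio and the number of rounds that fit
    the time budget, minimizing the convergence proxy."""
    best = None
    for p in np.linspace(0.0, 0.9, 19):
        # The slowest (straggler) device determines the per-round latency.
        per_round = max(round_latency(p, ct, mt)
                        for ct, mt in zip(comp_times, comm_times))
        rounds = int(time_budget // per_round)
        if rounds < 1:
            continue
        score = convergence_proxy(rounds, p)
        if best is None or score < best[0]:
            best = (score, p, rounds)
    return best  # (proxy value, pruning ratio, number of rounds)


if __name__ == "__main__":
    comp = [0.8, 1.2, 1.0]   # assumed local computation time per round (s)
    comm = [0.5, 0.9, 0.7]   # assumed model upload time per round (s)
    print(choose_pruning_ratio(comp, comm, time_budget=60.0))
```

Under these assumptions, a moderate pruning ratio wins: it buys extra rounds within the time budget without adding too much pruning-induced error, which mirrors the latency-versus-rounds tension the abstract highlights.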