Towards layer-wise quantization for heterogeneous federated clients

IF 4.4 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Hardware & Architecture)
Yang Xu, Junhao Cheng, Hongli Xu, Changyu Guo, Yunming Liao, Zhiwei Yao
{"title":"Towards layer-wise quantization for heterogeneous federated clients","authors":"Yang Xu,&nbsp;Junhao Cheng,&nbsp;Hongli Xu,&nbsp;Changyu Guo,&nbsp;Yunming Liao,&nbsp;Zhiwei Yao","doi":"10.1016/j.comnet.2025.111223","DOIUrl":null,"url":null,"abstract":"<div><div>Federated Learning (FL) has arisen to train deep learning models on massive private data, which are produced and possessed by geographically dispersed clients at the network edge. However, in edge computing scenarios, FL usually suffers from the constrained and heterogeneous communication resource. To achieve communication-efficient FL, we concentrate on the technique of model quantization. The existing researches in FL mainly perform model quantization at the grain of the entire model. However, according to our empirical analysis, when quantizing each layer of a model with the same quantization level, the amount of saved memory differs significantly across layers. Besides, the model exhibits different decreases in test accuracy when each layer is separately quantized to the same degree. To this end, we propose a more efficient and flexible Layer-wise Quantization scheme for FL, termed FedLQ. We further theoretically analyze the relationship between the convergence bound and the quantization level. Furthermore, considering that the quantization of each layer will yield different effects on the communication cost and model accuracy, we develop a joint metric (<em>i.e.</em>, layer significance) to evaluate the comprehensive influence of layer-wise quantization on model training, and design a significance-aware algorithm to determine adaptive layer-wise quantization levels for different clients. Extensive experiments in simulation environment illustrate that FedLQ is able to effectively reduce communication consumption while still achieving promising accuracy even with low-bit quantization. Compared to the baselines, FedLQ can achieve up to 5.77<span><math><mo>×</mo></math></span> speedup when reaching the target accuracy, or obtain at most 27% improvement in test accuracy under low-bits quantization scenarios.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"264 ","pages":"Article 111223"},"PeriodicalIF":4.4000,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128625001914","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) has arisen to train deep learning models on massive private data, which are produced and possessed by geographically dispersed clients at the network edge. However, in edge computing scenarios, FL usually suffers from constrained and heterogeneous communication resources. To achieve communication-efficient FL, we concentrate on the technique of model quantization. Existing research in FL mainly performs model quantization at the granularity of the entire model. However, according to our empirical analysis, when each layer of a model is quantized with the same quantization level, the amount of memory saved differs significantly across layers. Moreover, the model exhibits different decreases in test accuracy when each layer is separately quantized to the same degree. To this end, we propose a more efficient and flexible Layer-wise Quantization scheme for FL, termed FedLQ. We further theoretically analyze the relationship between the convergence bound and the quantization level. Furthermore, considering that the quantization of each layer yields different effects on communication cost and model accuracy, we develop a joint metric (i.e., layer significance) to evaluate the comprehensive influence of layer-wise quantization on model training, and design a significance-aware algorithm to determine adaptive layer-wise quantization levels for different clients. Extensive experiments in a simulated environment illustrate that FedLQ effectively reduces communication consumption while still achieving promising accuracy even with low-bit quantization. Compared to the baselines, FedLQ achieves up to 5.77× speedup in reaching the target accuracy, or up to a 27% improvement in test accuracy under low-bit quantization scenarios.
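
For readers unfamiliar with the mechanics, the following Python sketch illustrates generic layer-wise quantization of a client's model update, i.e., compressing each layer at its own bit width instead of applying one level to the whole model. It is only a minimal illustration under that assumption, not the paper's FedLQ algorithm: the function names and per-layer bit widths are hypothetical, whereas FedLQ selects the levels adaptively via its layer-significance metric.

import numpy as np

# Minimal sketch of layer-wise uniform (min-max) quantization.
# NOTE: generic illustration, not FedLQ; the bit widths below are hypothetical
# and would, in FedLQ, be chosen per layer and per client from the
# layer-significance metric described in the paper.

def quantize_layer(weights, bits):
    """Quantize one layer to `bits` bits with uniform min-max quantization, then dequantize."""
    w_min, w_max = weights.min(), weights.max()
    if w_max == w_min:                            # constant layer: nothing to quantize
        return weights.copy()
    levels = 2 ** bits - 1                        # number of quantization intervals
    scale = (w_max - w_min) / levels
    codes = np.round((weights - w_min) / scale)   # integer codes in [0, levels]
    return codes * scale + w_min                  # dequantized tensor the server would aggregate

def quantize_model(model, bits_per_layer):
    """Apply a (possibly different) quantization level to every layer."""
    return {name: quantize_layer(w, bits_per_layer[name]) for name, w in model.items()}

# Toy example with two layers and illustrative bit widths.
model = {"conv1": np.random.randn(16, 3, 3, 3), "fc": np.random.randn(10, 256)}
bits = {"conv1": 8, "fc": 4}                      # e.g., keep accuracy-sensitive layers at higher precision
compressed = quantize_model(model, bits)

Because each layer carries its own bit budget, the per-round communication cost becomes the sum of per-layer payloads rather than a single model-wide figure, which is what makes adaptive, per-client level assignment possible.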
Source journal: Computer Networks (Engineering & Technology - Telecommunications)
CiteScore: 10.80
Self-citation rate: 3.60%
Articles published: 434
Average review time: 8.6 months

Journal description: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.