Accelerating Federated Codistillation via Adaptive Computation Amount at Network Edge

IF 7.7 · CAS Region 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Zhihao Zeng;Xiaoning Zhang;Yangming Zhao;Ahmed Zoha;Muhammad Ali Imran;Yan Zhang
{"title":"Accelerating Federated Codistillation via Adaptive Computation Amount at Network Edge","authors":"Zhihao Zeng;Xiaoning Zhang;Yangming Zhao;Ahmed Zoha;Muhammad Ali Imran;Yan Zhang","doi":"10.1109/TMC.2025.3533591","DOIUrl":null,"url":null,"abstract":"The advent of Federated Learning (FL) empowers IoT devices to collectively train a shared model without local data exposure. In order to address the issue of Non-IID that causes model performance degradation, the recently proposed federated codistillation framework has shown great potential. However, due to the system heterogeneity of devices, the federated codistillation framework still faces a synchronization barrier issue, resulting in a non-negligible waiting time with a fixed computation amount (epoch or batch size) assigned. In this paper, we propose Adaptive Computation Amount Allocation (ACAA) to accelerate federated codistillation. Specifically, we leverage a criterion, solution inexactness, to quantify the computation amount. We dynamically adjust the solution inexactness of devices based on their computing power and bandwidth to enable them nearly simultaneous completion of training, reducing synchronization waiting time without sacrificing the training performance. The minimum required computation amount is determined by the coefficient of the distillation term and the gradient dissimilarity bound of Non-IID. We theoretically analyze the convergence of ACAA. Extensive experiments show that, compared to benchmark algorithms, ACAA can accelerate training by up to 5×.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 6","pages":"5584-5597"},"PeriodicalIF":7.7000,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10852387/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The advent of Federated Learning (FL) empowers IoT devices to collectively train a shared model without exposing local data. To address the model performance degradation caused by Non-IID data, the recently proposed federated codistillation framework has shown great potential. However, due to the system heterogeneity of devices, the federated codistillation framework still faces a synchronization barrier: when a fixed computation amount (epochs or batch size) is assigned, faster devices incur non-negligible waiting time. In this paper, we propose Adaptive Computation Amount Allocation (ACAA) to accelerate federated codistillation. Specifically, we leverage solution inexactness as a criterion to quantify the computation amount. We dynamically adjust the solution inexactness of each device based on its computing power and bandwidth so that all devices complete training nearly simultaneously, reducing synchronization waiting time without sacrificing training performance. The minimum required computation amount is determined by the coefficient of the distillation term and the gradient dissimilarity bound of the Non-IID data. We theoretically analyze the convergence of ACAA. Extensive experiments show that, compared to benchmark algorithms, ACAA can accelerate training by up to 5×.
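The core idea, adapting each device's per-round computation so that heterogeneous devices finish at roughly the same time, can be illustrated with a small sketch. The Python below is not the paper's algorithm: the throughput and bandwidth fields, the use of a local step count as a proxy for solution inexactness, and the `allocate_computation` helper are all illustrative assumptions, and `min_steps` merely stands in for the paper's minimum computation amount derived from the distillation coefficient and the gradient-dissimilarity bound.

```python
import math

def allocate_computation(devices, model_size_mb, min_steps):
    """Illustrative sketch (not the paper's method): choose per-device
    local step counts so that heterogeneous devices finish a round
    (computation + communication) at roughly the same wall-clock time.

    devices: list of dicts with
        'steps_per_sec'  -- local training throughput (computing power)
        'bandwidth_mbps' -- uplink bandwidth
    model_size_mb: size of the payload exchanged each round
    min_steps: assumed lower bound on local steps, standing in for the
        minimum computation amount the paper derives theoretically
    """
    # Per-device communication time; the slowest device running only the
    # minimum computation fixes the target round duration.
    comm = [8 * model_size_mb / d['bandwidth_mbps'] for d in devices]
    target = max(c + min_steps / d['steps_per_sec']
                 for c, d in zip(comm, devices))

    # Give every other device as many local steps as fit in the target
    # window, never dropping below the minimum required amount.
    steps = []
    for c, d in zip(comm, devices):
        budget = max(target - c, 0.0)
        steps.append(max(min_steps, math.floor(budget * d['steps_per_sec'])))
    return steps

# Example: a fast, well-connected device vs. a slow, constrained one.
devices = [
    {'steps_per_sec': 50.0, 'bandwidth_mbps': 100.0},
    {'steps_per_sec': 10.0, 'bandwidth_mbps': 20.0},
]
print(allocate_computation(devices, model_size_mb=25.0, min_steps=20))
# -> [500, 20]
```

In this toy setting the slow device runs only the minimum 20 steps while the fast device fills the same wall-clock window with 500, which captures the waiting-time reduction the abstract describes.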
Source Journal
IEEE Transactions on Mobile Computing (Engineering & Technology - Telecommunications)
CiteScore: 12.90
Self-citation rate: 2.50%
Articles published: 403
Review time: 6.6 months
Journal Introduction: IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.