FlocOff: Data Heterogeneity Resilient Federated Learning With Communication-Efficient Edge Offloading

Mulei Ma;Chenyu Gong;Liekang Zeng;Yang Yang;Liantao Wu
{"title":"FlocOff: Data Heterogeneity Resilient Federated Learning With Communication-Efficient Edge Offloading","authors":"Mulei Ma;Chenyu Gong;Liekang Zeng;Yang Yang;Liantao Wu","doi":"10.1109/JSAC.2024.3431526","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) has emerged as a fundamental learning paradigm to harness massive data scattered at geo-distributed edge devices in a privacy-preserving way. Given the heterogeneous deployment of edge devices, however, their data are usually Non-IID, introducing significant challenges to FL including degraded training accuracy, intensive communication costs, and high computing complexity. Towards that, traditional approaches typically utilize adaptive mechanisms, which may suffer from scalability issues, increased computational overhead, and limited adaptability to diverse edge environments. To address that, this paper instead leverages the observation that the computation offloading involves inherent functionalities such as node matching and service correlation to achieve data reshaping and proposes \n<underline>F</u>\nederated \n<underline>l</u>\nearning based \n<underline>o</u>\nn \n<underline>c</u>\nomputing \n<underline>Off</u>\nloading (FlocOff) framework, to address data heterogeneity and resource-constrained challenges. Specifically, FlocOff formulates the FL process with Non-IID data in edge scenarios and derives rigorous analysis on the impact of imbalanced data distribution. Based on this, FlocOff decouples the optimization in two steps, namely: 1) Minimizes the Kullback-Leibler (KL) divergence via Computation Offloading scheduling (MKL-CO); 2) Minimizes the Communication Cost through Resource Allocation (MCC-RA). Extensive experimental results demonstrate that the proposed FlocOff effectively improves model convergence and accuracy by 14.3%-32.7% while reducing data heterogeneity under various data distributions.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"3262-3277"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10605763/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Federated Learning (FL) has emerged as a fundamental learning paradigm for harnessing massive data scattered across geo-distributed edge devices in a privacy-preserving way. Given the heterogeneous deployment of edge devices, however, their data are usually Non-IID, which introduces significant challenges to FL, including degraded training accuracy, intensive communication costs, and high computing complexity. Traditional approaches typically rely on adaptive mechanisms, which may suffer from scalability issues, increased computational overhead, and limited adaptability to diverse edge environments. Instead, this paper leverages the observation that computation offloading involves inherent functionalities, such as node matching and service correlation, that can achieve data reshaping, and proposes the Federated learning based on computing Offloading (FlocOff) framework to address data heterogeneity and resource constraints. Specifically, FlocOff formulates the FL process with Non-IID data in edge scenarios and derives a rigorous analysis of the impact of imbalanced data distribution. Based on this, FlocOff decouples the optimization into two steps: 1) minimizing the Kullback-Leibler (KL) divergence via computation offloading scheduling (MKL-CO); and 2) minimizing the communication cost through resource allocation (MCC-RA). Extensive experimental results demonstrate that the proposed FlocOff effectively improves model convergence and accuracy by 14.3%-32.7% while reducing data heterogeneity under various data distributions.
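To make the heterogeneity metric concrete, the sketch below is a minimal Python illustration (not the paper's MKL-CO scheduler) of how Non-IID severity can be quantified as the KL divergence between each edge node's local label distribution and the global label distribution; an offloading scheduler in the spirit of FlocOff would then reshape data placement to drive these divergences down. All function and variable names are hypothetical.

```python
# Illustrative sketch only: measure per-node Non-IID severity as
# KL(local label distribution || global label distribution).
import numpy as np

def label_distribution(labels, num_classes):
    """Empirical label distribution of one edge node's local dataset (with smoothing)."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return (counts + 1e-12) / (counts.sum() + 1e-12 * num_classes)

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same label set."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical example: three edge nodes with skewed local label sets (10-class task).
rng = np.random.default_rng(0)
num_classes = 10
node_labels = [
    rng.choice([0, 1, 2], size=500),          # node holding only classes 0-2
    rng.choice([3, 4], size=300),             # node holding only classes 3-4
    rng.integers(0, num_classes, size=800),   # nearly IID node
]

global_dist = label_distribution(np.concatenate(node_labels), num_classes)
for i, labels in enumerate(node_labels):
    local_dist = label_distribution(labels, num_classes)
    print(f"node {i}: KL(local || global) = {kl_divergence(local_dist, global_dist):.3f}")
```

Running this prints a per-node heterogeneity score: the two class-skewed nodes yield large KL values, while the nearly IID node's value is close to zero, which is the quantity an offloading-based reshaping step would aim to minimize.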