Resilient and Communication Efficient Learning for Heterogeneous Federated Systems

Zhuangdi Zhu, Junyuan Hong, Steve Drew, Jiayu Zhou
{"title":"Resilient and Communication Efficient Learning for Heterogeneous Federated Systems.","authors":"Zhuangdi Zhu, Junyuan Hong, Steve Drew, Jiayu Zhou","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>The rise of Federated Learning (FL) is bringing machine learning to edge computing by utilizing data scattered across edge devices. However, the heterogeneity of edge network topologies and the uncertainty of wireless transmission are two major obstructions of FL's wide application in edge computing, leading to prohibitive convergence time and high communication cost. In this work, we propose an FL scheme to address both challenges simultaneously. Specifically, we enable edge devices to learn self-distilled neural networks that are readily prunable to arbitrary sizes, which capture the knowledge of the learning domain in a nested and progressive manner. Not only does our approach tackle system heterogeneity by serving edge devices with varying model architectures, but it also alleviates the issue of connection uncertainty by allowing transmitting part of the model parameters under faulty network connections, without wasting the contributing knowledge of the transmitted parameters. Extensive empirical studies show that under system heterogeneity and network instability, our approach demonstrates significant resilience and higher communication efficiency compared to the state-of-the-art.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"162 ","pages":"27504-27526"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10097502/pdf/nihms-1888103.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of machine learning research","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The rise of Federated Learning (FL) is bringing machine learning to edge computing by utilizing data scattered across edge devices. However, the heterogeneity of edge network topologies and the uncertainty of wireless transmission are two major obstacles to FL's wide application in edge computing, leading to prohibitive convergence time and high communication cost. In this work, we propose an FL scheme that addresses both challenges simultaneously. Specifically, we enable edge devices to learn self-distilled neural networks that are readily prunable to arbitrary sizes and that capture the knowledge of the learning domain in a nested, progressive manner. Not only does our approach tackle system heterogeneity by serving edge devices with varying model architectures, but it also alleviates connection uncertainty by allowing part of the model parameters to be transmitted under faulty network connections, without wasting the knowledge contributed by the parameters that do arrive. Extensive empirical studies show that, under system heterogeneity and network instability, our approach demonstrates significant resilience and higher communication efficiency than the state-of-the-art.
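To make the mechanism concrete, below is a minimal PyTorch sketch of the two ideas described in the abstract: a network in which the first p-fraction of each layer's units forms a valid nested submodel, and a server-side merge that integrates whatever prefix of the parameters survives a faulty connection. This is not the authors' code; all names (NestedLinear, NestedMLP, merge_prefix, the width fraction p) are illustrative assumptions.

```python
# Illustrative sketch only: the paper's self-distillation training and full
# FL protocol are not reproduced here. Assumes PyTorch is installed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NestedLinear(nn.Linear):
    """Linear layer whose first round(p * out_features) units form a
    valid sub-layer, so submodels are nested prefixes of the full model."""

    def forward(self, x: torch.Tensor, p: float = 1.0) -> torch.Tensor:
        out_dim = max(1, int(round(p * self.out_features)))
        in_dim = x.shape[-1]                     # input may already be pruned
        weight = self.weight[:out_dim, :in_dim]  # prefix block of the full weight
        bias = self.bias[:out_dim] if self.bias is not None else None
        return F.linear(x, weight, bias)


class NestedMLP(nn.Module):
    """Two-layer net; the width fraction p selects a nested submodel."""

    def __init__(self, d_in: int = 32, d_hidden: int = 64, d_out: int = 10):
        super().__init__()
        self.fc1 = NestedLinear(d_in, d_hidden)
        self.fc2 = NestedLinear(d_hidden, d_out)

    def forward(self, x: torch.Tensor, p: float = 1.0) -> torch.Tensor:
        h = F.relu(self.fc1(x, p))
        # Keep the full output layer so every submodel predicts all classes.
        return self.fc2(h, p=1.0)


@torch.no_grad()
def merge_prefix(full: torch.Tensor, received: torch.Tensor) -> None:
    """Server-side merge: overwrite only the slice that actually arrived,
    so a truncated upload still contributes its knowledge."""
    idx = tuple(slice(0, s) for s in received.shape)
    full[idx].copy_(received)


if __name__ == "__main__":
    torch.manual_seed(0)
    client, server = NestedMLP(), NestedMLP()
    x = torch.randn(4, 32)
    for p in (0.25, 0.5, 1.0):  # heterogeneous devices run different widths
        print(f"p={p}: output shape {tuple(client(x, p).shape)}")

    # Connection drops after half of fc1's rows are sent: merge what arrived.
    received = client.fc1.weight[:32].clone()
    merge_prefix(server.fc1.weight, received)
```

In the paper, the nested widths are trained jointly via self-distillation so that smaller submodels progressively inherit the knowledge of larger ones; the sketch above omits that training loop and shows only the prefix-slicing and partial-merge structure that makes truncated transmissions usable.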
