Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks

Yudi Huang, Tingyang Sun, Ting He
{"title":"带宽受限网络中基于重叠的分散式联合学习","authors":"Yudi Huang, Tingyang Sun, Ting He","doi":"arxiv-2408.04705","DOIUrl":null,"url":null,"abstract":"The emerging machine learning paradigm of decentralized federated learning\n(DFL) has the promise of greatly boosting the deployment of artificial\nintelligence (AI) by directly learning across distributed agents without\ncentralized coordination. Despite significant efforts on improving the\ncommunication efficiency of DFL, most existing solutions were based on the\nsimplistic assumption that neighboring agents are physically adjacent in the\nunderlying communication network, which fails to correctly capture the\ncommunication cost when learning over a general bandwidth-limited network, as\nencountered in many edge networks. In this work, we address this gap by\nleveraging recent advances in network tomography to jointly design the\ncommunication demands and the communication schedule for overlay-based DFL in\nbandwidth-limited networks without requiring explicit cooperation from the\nunderlying network. By carefully analyzing the structure of our problem, we\ndecompose it into a series of optimization problems that can each be solved\nefficiently, to collectively minimize the total training time. 
Extensive\ndata-driven simulations show that our solution can significantly accelerate DFL\nin comparison with state-of-the-art designs.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks\",\"authors\":\"Yudi Huang, Tingyang Sun, Ting He\",\"doi\":\"arxiv-2408.04705\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The emerging machine learning paradigm of decentralized federated learning\\n(DFL) has the promise of greatly boosting the deployment of artificial\\nintelligence (AI) by directly learning across distributed agents without\\ncentralized coordination. Despite significant efforts on improving the\\ncommunication efficiency of DFL, most existing solutions were based on the\\nsimplistic assumption that neighboring agents are physically adjacent in the\\nunderlying communication network, which fails to correctly capture the\\ncommunication cost when learning over a general bandwidth-limited network, as\\nencountered in many edge networks. In this work, we address this gap by\\nleveraging recent advances in network tomography to jointly design the\\ncommunication demands and the communication schedule for overlay-based DFL in\\nbandwidth-limited networks without requiring explicit cooperation from the\\nunderlying network. By carefully analyzing the structure of our problem, we\\ndecompose it into a series of optimization problems that can each be solved\\nefficiently, to collectively minimize the total training time. 
Extensive\\ndata-driven simulations show that our solution can significantly accelerate DFL\\nin comparison with state-of-the-art designs.\",\"PeriodicalId\":501280,\"journal\":{\"name\":\"arXiv - CS - Networking and Internet Architecture\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Networking and Internet Architecture\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.04705\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Networking and Internet Architecture","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.04705","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The emerging machine learning paradigm of decentralized federated learning (DFL) has the promise of greatly boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination. Despite significant efforts on improving the communication efficiency of DFL, most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network, which fails to correctly capture the communication cost when learning over a general bandwidth-limited network, as encountered in many edge networks. In this work, we address this gap by leveraging recent advances in network tomography to jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network. By carefully analyzing the structure of our problem, we decompose it into a series of optimization problems that can each be solved efficiently, to collectively minimize the total training time. Extensive data-driven simulations show that our solution can significantly accelerate DFL in comparison with state-of-the-art designs.
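To make the DFL setting concrete, the sketch below illustrates generic decentralized learning by gossip averaging: each agent takes a gradient step on its own data and then averages its model with its overlay neighbors, reaching consensus without a central server. This is a minimal, hypothetical illustration of the DFL paradigm — the ring topology, Metropolis mixing weights, and linear-regression task are assumptions for demonstration, not the paper's tomography-based joint design of demands and schedules.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3

# Assumed ring overlay: each agent talks only to its two neighbors.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

# Metropolis-Hastings mixing weights (doubly stochastic for this ring).
W = np.zeros((n_agents, n_agents))
for i, nbrs in adj.items():
    for j in nbrs:
        W[i, j] = 1.0 / (1 + max(len(adj[i]), len(adj[j])))
    W[i, i] = 1.0 - W[i].sum()

# Each agent holds private samples of a shared linear-regression task.
x_true = rng.normal(size=dim)
A = [rng.normal(size=(20, dim)) for _ in range(n_agents)]
b = [Ai @ x_true + 0.01 * rng.normal(size=20) for Ai in A]

models = np.zeros((n_agents, dim))
lr = 0.05
for _ in range(500):
    # Local computation: one gradient step on each agent's own data.
    grads = np.stack([Ai.T @ (Ai @ xi - bi) / len(bi)
                      for Ai, bi, xi in zip(A, b, models)])
    models = models - lr * grads
    # Communication round: gossip averaging over the overlay links.
    # In a bandwidth-limited network, this step dominates training time.
    models = W @ models

# Agents agree (small spread) and recover x_true up to noise.
print("disagreement:", np.max(np.std(models, axis=0)))
print("error:", np.linalg.norm(models.mean(axis=0) - x_true))
```

The `models = W @ models` line is exactly where overlay-based designs intervene: which pairs of agents exchange models (the communication demands) and in what order (the schedule) determine the per-iteration wall-clock cost that the paper's optimization minimizes.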