Coding-Aware Rate Splitting for Distributed Coded Edge Learning

Tianheng Li, Jingzhe Zhang, Xiaofan He
{"title":"分布式编码边缘学习的编码感知速率分割","authors":"Tianheng Li, Jingzhe Zhang, Xiaofan He","doi":"10.1109/INFOCOMWKSHPS57453.2023.10226011","DOIUrl":null,"url":null,"abstract":"Driven by the explosive escalation of machine learning applications, considerable efforts have been devoted to distributed edge learning. To alleviate the so-called straggling issue, coded computing that injects elaborate redundancy into computation emerges as a promising solution, which in turn ignites the recent research interests in distributed coded edge learning. Albeit effectively mitigating straggling, coded edge learning brings new challenges in communications. In particular, existing transmission schemes are mainly designed for conventional distributed edge learning, where the data offloaded to different edge nodes (ENs) are non-overlapping. They cannot achieve the best performance when applied directly to distributed coded edge learning, due to the redundancy among the data for different ENs in the coded settings. To the best of our knowledge, a tailor-designed transmission scheme for distributed coded edge learning still remains open. With this consideration, a novel coding-aware rate splitting scheme is proposed in this work, which splits the data to different ENs in a coding-aware way to avoid transmission redundancy and enables multiple simultaneous multi-casts to the ENs. To minimize the overall processing latency, an iterative optimization algorithm is developed based on the concave-convex procedure (CCCP) framework. Simulations demonstrate that the proposed scheme can substantially reduce the overall latency of distributed coded edge learning as compared to the baselines.","PeriodicalId":354290,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"65 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Coding-Aware Rate Splitting for Distributed Coded Edge Learning\",\"authors\":\"Tianheng Li, Jingzhe Zhang, Xiaofan He\",\"doi\":\"10.1109/INFOCOMWKSHPS57453.2023.10226011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Driven by the explosive escalation of machine learning applications, considerable efforts have been devoted to distributed edge learning. To alleviate the so-called straggling issue, coded computing that injects elaborate redundancy into computation emerges as a promising solution, which in turn ignites the recent research interests in distributed coded edge learning. Albeit effectively mitigating straggling, coded edge learning brings new challenges in communications. In particular, existing transmission schemes are mainly designed for conventional distributed edge learning, where the data offloaded to different edge nodes (ENs) are non-overlapping. They cannot achieve the best performance when applied directly to distributed coded edge learning, due to the redundancy among the data for different ENs in the coded settings. To the best of our knowledge, a tailor-designed transmission scheme for distributed coded edge learning still remains open. With this consideration, a novel coding-aware rate splitting scheme is proposed in this work, which splits the data to different ENs in a coding-aware way to avoid transmission redundancy and enables multiple simultaneous multi-casts to the ENs. 
To minimize the overall processing latency, an iterative optimization algorithm is developed based on the concave-convex procedure (CCCP) framework. Simulations demonstrate that the proposed scheme can substantially reduce the overall latency of distributed coded edge learning as compared to the baselines.\",\"PeriodicalId\":354290,\"journal\":{\"name\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"volume\":\"65 4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10226011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10226011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Driven by the explosive growth of machine learning applications, considerable effort has been devoted to distributed edge learning. To alleviate the so-called straggling issue, coded computing, which injects carefully designed redundancy into the computation, has emerged as a promising solution and has in turn ignited recent research interest in distributed coded edge learning. While it effectively mitigates straggling, coded edge learning brings new challenges in communications. In particular, existing transmission schemes are mainly designed for conventional distributed edge learning, where the data offloaded to different edge nodes (ENs) are non-overlapping. Applied directly to distributed coded edge learning, they cannot achieve the best performance, because in the coded setting the data for different ENs are redundant with one another. To the best of our knowledge, a transmission scheme tailored to distributed coded edge learning remains an open problem. With this in mind, this work proposes a novel coding-aware rate splitting scheme, which splits the data destined for different ENs in a coding-aware way to avoid transmission redundancy and enables multiple simultaneous multicasts to the ENs. To minimize the overall processing latency, an iterative optimization algorithm is developed based on the concave-convex procedure (CCCP) framework. Simulations demonstrate that, compared to the baselines, the proposed scheme substantially reduces the overall latency of distributed coded edge learning.
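The abstract names two technical building blocks without detail: coded computing for straggler mitigation, and CCCP for the latency minimization. The two sketches below are illustrative stand-ins, not the paper's actual scheme. First, a minimal (n, k) coded matrix-vector multiplication in Python, assuming a random real-valued generator matrix (all names here, e.g. `G`, `coded`, `fast`, are hypothetical): any k of the n edge-node results suffice to decode, so the n - k slowest nodes (the stragglers) can simply be ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (n, k) coded matrix-vector multiply: split A into k row
# blocks, encode them into n coded blocks with a random generator matrix
# (any k rows invertible with probability 1), and recover A @ x from the
# results of ANY k of the n edge nodes.
n, k = 5, 3
A = rng.standard_normal((k * 4, 8))  # global data matrix
x = rng.standard_normal(8)

blocks = np.split(A, k)              # k uncoded row blocks
G = rng.standard_normal((n, k))      # encoding (generator) matrix
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each node i computes coded[i] @ x; suppose only nodes {0, 2, 4} finish.
fast = [0, 2, 4]
partial = np.stack([coded[i] @ x for i in fast])  # k finished results
decoded = np.linalg.solve(G[fast], partial)       # invert the k x k subcode
assert np.allclose(np.concatenate(decoded), A @ x)
```

Second, a toy CCCP iteration. CCCP minimizes a difference of convex functions f(x) = g(x) - h(x) by repeatedly linearizing the concave part -h at the current iterate and solving the resulting convex surrogate. The paper applies this framework to its latency objective; the objective below, f(x) = x^4 - 2x^2, is a made-up example chosen so that each convex surrogate has a closed-form minimizer.

```python
import numpy as np

def cccp(grad_h, solve_convex, x0, tol=1e-8, max_iter=100):
    """Minimize f(x) = g(x) - h(x), with g and h convex, via the
    concave-convex procedure: at each step, replace -h(x) with its
    linearization at the current iterate and minimize the convex
    surrogate g(x) - grad_h(x_t) * x exactly."""
    x = x0
    for _ in range(max_iter):
        x_new = solve_convex(grad_h(x))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy DC program: f(x) = x**4 - 2*x**2, i.e. g(x) = x**4, h(x) = 2*x**2.
# The surrogate min_x x**4 - s*x with s = h'(x_t) = 4*x_t has the
# closed-form minimizer x = (s/4)**(1/3), so each iteration maps
# x_t -> cbrt(x_t), converging to the stationary point x = 1.
grad_h = lambda x: 4.0 * x
solve_convex = lambda s: np.cbrt(s / 4.0)

print(cccp(grad_h, solve_convex, x0=3.0))  # ~= 1.0
```

Because each surrogate upper-bounds f and is tight at the current iterate, the CCCP iterates monotonically decrease the objective and converge to a stationary point under mild conditions, which is the standard guarantee motivating CCCP-based algorithms such as the one developed in the paper.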