Tianheng Li, Jingzhe Zhang, Xiaofan He
IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), published 2023-05-20
DOI: 10.1109/INFOCOMWKSHPS57453.2023.10226011
Coding-Aware Rate Splitting for Distributed Coded Edge Learning
Driven by the explosive growth of machine learning applications, considerable effort has been devoted to distributed edge learning. To alleviate the so-called straggler issue, coded computing, which injects carefully designed redundancy into the computation, has emerged as a promising solution and has in turn ignited recent research interest in distributed coded edge learning. Although it effectively mitigates straggling, coded edge learning brings new challenges in communications. In particular, existing transmission schemes are mainly designed for conventional distributed edge learning, where the data offloaded to different edge nodes (ENs) are non-overlapping. When applied directly to distributed coded edge learning, these schemes cannot achieve the best performance, because in coded settings the data destined for different ENs overlap. To the best of our knowledge, a transmission scheme tailored to distributed coded edge learning remains an open problem. Motivated by this, a novel coding-aware rate-splitting scheme is proposed in this work, which splits the data for different ENs in a coding-aware manner to avoid transmission redundancy and enables multiple simultaneous multicasts to the ENs. To minimize the overall processing latency, an iterative optimization algorithm is developed based on the concave-convex procedure (CCCP) framework. Simulations demonstrate that the proposed scheme substantially reduces the overall latency of distributed coded edge learning compared to the baselines.
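The abstract does not detail the paper's optimization algorithm, but the CCCP framework it invokes follows a standard pattern: an objective of the form u(x) - v(x), with u and v convex, is minimized by repeatedly linearizing the concave part -v at the current iterate and solving the resulting convex surrogate. The sketch below is a generic, hypothetical illustration of that loop on a toy one-dimensional difference-of-convex problem (u(x) = x^4, v(x) = x^2); it is not the paper's latency-minimization formulation.

```python
import math

def cccp_minimize(grad_v, argmin_surrogate, x0, iters=50, tol=1e-9):
    """Generic CCCP loop for minimizing u(x) - v(x), with u, v convex.

    At each step the concave part -v is replaced by its linearization
    at the current iterate, and the convex surrogate u(x) - s*x is
    minimized exactly by the caller-supplied solver.
    """
    x = x0
    for _ in range(iters):
        s = grad_v(x)                # slope of v at the current iterate
        x_new = argmin_surrogate(s)  # solve argmin_x u(x) - s*x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy DC objective: f(x) = x**4 - x**2, split as u(x) = x**4, v(x) = x**2.
# The surrogate x**4 - s*x is minimized where 4*x**3 = s, i.e. x = (s/4)**(1/3).
x_star = cccp_minimize(
    grad_v=lambda x: 2.0 * x,
    argmin_surrogate=lambda s: math.copysign(abs(s / 4.0) ** (1.0 / 3.0), s),
    x0=1.0,
)
print(x_star)  # ≈ 0.7071, a stationary point of x**4 - x**2
```

As with CCCP in general, each iteration solves a convex problem and monotonically decreases the objective, so the sequence converges to a stationary point rather than a guaranteed global minimum.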