UCL: Unit Competition of Layers for Streaming Tasks in Heterogeneous Networks

Jing Yu, Liantao Wu, Guoliang Gao, Chenyu Gong
{"title":"UCL: Unit Competition of Layers for Streaming Tasks in Heterogeneous Networks","authors":"Jing Yu, Liantao Wu, Guoliang Gao, Chenyu Gong","doi":"10.1109/GLOBECOM48099.2022.10000741","DOIUrl":null,"url":null,"abstract":"Partitioning and offloading the deep neural network (DNN) model over multi-tier computing units have been recently proposed to shorten the inference time. However, the state-of-the-art cannot adapt to large-scale offloading problems for streaming tasks because of its exponential complexity. Besides, as an essential kind of DNNs, the offloading of grouped con-volutional neural networks (GCNNs) has not been explored yet. Motivated by the above facts, in this paper, we concentrate on the offloading of chained DNNs (CDNNs) and GCNNs for streaming tasks. Consider a typical heterogeneous network consisting of various computing units, the user equipment (UE) publishes computation-intensive and delay-sensitive streaming DNN tasks while computing units accomplish them collaboratively. To mini-mize the delay of processing the task stream, DNN layers should be offloaded to appropriate units, which is the streaming-task multi-unit (STMU) problem. To tackle this problem, we formulate a non-cooperative potential game called unit competition of layers (UCL). The theoretical analysis proves the existence of the Nash equilibrium (NE), and the corresponding algorithm with linear complexity is developed to achieve the NE. Finally, extensive experiments demonstrate that UCL outperforms the state-of-the-art significantly in large-scale scenarios while maintaining similar performance on small-scale tasks.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM48099.2022.10000741","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Partitioning and offloading the deep neural network (DNN) model over multi-tier computing units has recently been proposed to shorten the inference time. However, the state-of-the-art cannot handle large-scale offloading problems for streaming tasks because of its exponential complexity. Moreover, the offloading of grouped convolutional neural networks (GCNNs), an essential class of DNN, has not yet been explored. Motivated by these facts, in this paper we concentrate on the offloading of chained DNNs (CDNNs) and GCNNs for streaming tasks. In a typical heterogeneous network consisting of various computing units, the user equipment (UE) publishes computation-intensive and delay-sensitive streaming DNN tasks, and the computing units accomplish them collaboratively. To minimize the delay of processing the task stream, DNN layers should be offloaded to appropriate units, which constitutes the streaming-task multi-unit (STMU) problem. To tackle this problem, we formulate a non-cooperative potential game called unit competition of layers (UCL). Theoretical analysis proves the existence of a Nash equilibrium (NE), and a corresponding algorithm with linear complexity is developed to reach the NE. Finally, extensive experiments demonstrate that UCL significantly outperforms the state-of-the-art in large-scale scenarios while maintaining similar performance on small-scale tasks.
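The abstract only describes the UCL game at a high level. As an illustration of the underlying idea, the short Python sketch below shows the kind of best-response dynamics typically used to reach a Nash equilibrium in a layer-to-unit assignment game: each DNN layer repeatedly moves to the computing unit that minimizes the overall delay given the other layers' choices. All layer workloads, unit speeds, and link rates are invented placeholders, and the sketch is not the paper's UCL algorithm.

```python
# Illustrative sketch only: generic best-response dynamics for assigning the
# layers of a chained DNN to heterogeneous computing units so as to reduce
# end-to-end delay. All numbers below are hypothetical; this is NOT the
# paper's linear-complexity UCL algorithm.

layer_flops = [4.0, 8.0, 2.0, 6.0]        # per-layer workload (GFLOPs, assumed)
layer_out_mb = [1.0, 0.5, 0.5, 0.1]       # output size passed to the next layer (MB, assumed)
unit_speed = {"UE": 1.0, "edge": 4.0, "cloud": 10.0}   # GFLOPs per second (assumed)
link_rate = {"UE": 5.0, "edge": 20.0, "cloud": 50.0}   # MB per second (assumed)

def delay(assignment):
    """Compute delay of every layer plus transfer delay whenever
    consecutive layers run on different units."""
    total = 0.0
    for i, unit in enumerate(assignment):
        total += layer_flops[i] / unit_speed[unit]
        if i + 1 < len(assignment) and assignment[i + 1] != unit:
            total += layer_out_mb[i] / link_rate[unit]
    return total

def best_response(assignment, max_rounds=20):
    """Each layer in turn switches to the unit that minimizes the total
    delay; in a potential game this iteration converges to a NE."""
    assignment = list(assignment)
    for _ in range(max_rounds):
        changed = False
        for i in range(len(assignment)):
            best = min(unit_speed,
                       key=lambda u: delay(assignment[:i] + [u] + assignment[i + 1:]))
            if best != assignment[i]:
                assignment[i] = best
                changed = True
        if not changed:      # no layer wants to deviate: equilibrium reached
            break
    return assignment

print(best_response(["UE"] * len(layer_flops)))  # e.g. all layers migrate to 'cloud'
```

The sketch captures only the game-theoretic intuition (unilateral deviations that lower the shared potential); the paper's contribution is a formulation and algorithm that achieve this with linear complexity for streaming CDNN and GCNN tasks.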