AVEC: Accelerator Virtualization in Cloud-Edge Computing for Deep Learning Libraries

J. Kennedy, B. Varghese, C. Reaño
DOI: 10.1109/ICFEC51620.2021.00013
Published in: 2021 IEEE 5th International Conference on Fog and Edge Computing (ICFEC)
Publication date: 2021-03-08
Citations: 4

Abstract

Edge computing offers the distinct advantage of harnessing compute capabilities on resources located at the edge of the network to run workloads on behalf of relatively weak user devices. This is achieved by offloading computationally intensive workloads, such as deep learning, from user devices to the edge. Using the edge reduces the overall communication latency of applications, as workloads can be processed closer to where data is generated on user devices rather than being sent to geographically distant clouds. Specialised hardware accelerators, such as Graphics Processing Units (GPUs), available in the cloud-edge network can enhance the performance of computationally intensive workloads that are offloaded from devices to the edge. The underlying approach required to facilitate this is virtualization of GPUs. This paper therefore investigates the potential of GPU accelerator virtualization to improve the performance of deep learning workloads in a cloud-edge environment. The AVEC accelerator virtualization framework is proposed, which incurs minimal overhead and requires no source-code modification of the workload. AVEC intercepts local calls to a GPU on a device and forwards them to an edge resource seamlessly. The feasibility of AVEC is demonstrated on a real-world application, namely OpenPose, using the Caffe deep learning library. It is observed that on a lab-based experimental test-bed AVEC delivers up to 7.48x speedup despite the communication overheads incurred by data transfers.
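The core mechanism the abstract describes, intercepting a device's local accelerator calls and forwarding them transparently to a remote edge resource, can be illustrated with a minimal sketch. The example below is not AVEC's actual implementation (which interposes on native GPU library calls); it is an illustrative Python analogue in which a client-side proxy intercepts attribute-style "library" calls, serialises them, and ships them over a socket to an "edge" process that executes them and returns the result. The names `InterceptingProxy`, `edge_server`, and the operations in `EDGE_OPS` are all hypothetical.

```python
import pickle
import socket
import threading

# Hypothetical table of "GPU" operations hosted on the edge resource.
EDGE_OPS = {
    "vector_add": lambda a, b: [x + y for x, y in zip(a, b)],
    "scale": lambda v, k: [x * k for x in v],
}

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def edge_server(sock):
    """Edge side: receive (op, args) requests, execute, and reply."""
    while True:
        header = sock.recv(4)
        if len(header) < 4:
            break  # client closed the connection
        op, args = pickle.loads(recv_exact(sock, int.from_bytes(header, "big")))
        payload = pickle.dumps(EDGE_OPS[op](*args))
        sock.sendall(len(payload).to_bytes(4, "big") + payload)

class InterceptingProxy:
    """Device side: attribute access yields a callable that forwards the
    call to the edge, so it looks like an ordinary local library call."""
    def __init__(self, sock):
        self._sock = sock

    def __getattr__(self, op):
        def remote_call(*args):
            payload = pickle.dumps((op, args))
            self._sock.sendall(len(payload).to_bytes(4, "big") + payload)
            size = int.from_bytes(recv_exact(self._sock, 4), "big")
            return pickle.loads(recv_exact(self._sock, size))
        return remote_call

# Wire the "device" and the "edge" together over a local socket pair.
client_sock, server_sock = socket.socketpair()
threading.Thread(target=edge_server, args=(server_sock,), daemon=True).start()

gpu = InterceptingProxy(client_sock)
print(gpu.vector_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
print(gpu.scale([1, 2], 5))                     # [5, 10]
```

In AVEC the interception happens at the native GPU library boundary rather than in Python, which is why no source-code modification of the workload is needed; the sketch above only mirrors the request/response shape of that forwarding.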