Improving the Quality of Inference for Applications using Chained DNN Models during Edge Server Handover

Alex Xie, Yang Peng
DOI: 10.1109/SEC54971.2022.00079
Published in: 2022 IEEE/ACM 7th Symposium on Edge Computing (SEC), December 2022
Citations: 2

Abstract

Recent advances in deep neural networks (DNNs) have greatly benefited mobile applications that perform real-time video analytics. However, mobile devices' computing power usually limits them from inferring complex DNN models timely. Edge intelligence has emerged to help mobile apps offload DNN inference tasks to powerful edge servers for accelerated inference services. One major challenge that edge intelligence faces is maintaining a satisfactory quality of service when users move across edge servers. To address this issue, we propose a novel solution to help improve the quality of inference services for real-time video analytics applications that use chained DNN models. This solution includes two schemes: one maximizes the use of mobile devices to improve inference quality during the handover between edge servers, and the other provides offloading decisions to minimize the end-to-end inference delay when edge servers are available. We evaluate the proposed scheme using a DNN-based real-time traffic monitoring application through testbed and simulation experiments. The results show that our solution can improve inference quality by 52% during handover compared to the greedy algorithm-based solution.
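To make the second scheme concrete, the sketch below shows one simple way an offloading decision for a chained DNN pipeline could be framed: pick the split point between on-device and edge execution that minimizes end-to-end delay. This is an illustrative assumption, not the paper's actual algorithm; the function name `best_split` and all delay figures are hypothetical.

```python
def best_split(local_delay, edge_delay, transfer_delay):
    """Return the split index k (and its cost) that minimizes end-to-end
    delay when chained models [0, k) run on the mobile device and models
    [k, n) run on the edge server.

    transfer_delay[k] is the cost of shipping the intermediate result
    produced before stage k: transfer_delay[0] is the cost of sending
    the raw input (fully offloaded), and transfer_delay[n] is 0
    (fully local execution, nothing is transferred)."""
    n = len(local_delay)
    best_k, best_cost = 0, float("inf")
    for k in range(n + 1):
        # Delay = local stages + one network transfer + edge stages.
        cost = sum(local_delay[:k]) + transfer_delay[k] + sum(edge_delay[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost
```

For example, with a three-model chain where the edge server is much faster than the device, the minimum-delay choice is typically to offload early unless the network transfer cost dominates.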