Online Learning Algorithms for Offloading Augmented Reality Requests with Uncertain Demands in MECs

Zichuan Xu, Dongqi Liu, W. Liang, Wenzheng Xu, Haipeng Dai, Qiufen Xia, Pan Zhou
{"title":"Online Learning Algorithms for Offloading Augmented Reality Requests with Uncertain Demands in MECs","authors":"Zichuan Xu, Dongqi Liu, W. Liang, Wenzheng Xu, Haipeng Dai, Qiufen Xia, Pan Zhou","doi":"10.1109/ICDCS51616.2021.00105","DOIUrl":null,"url":null,"abstract":"Augmented Reality (AR) has various practical applications in healthcare, education, and entertainment. To provide a fully interactive and immersive experience, AR applications require extremely high responsiveness and ultra-low processing latency. Mobile edge computing (MEC) has shown great potential in meeting such stringent requirements and demands of AR applications by implementing AR requests in edge servers within the close proximity of these applications. In this paper, we investigate the problem of reward maximization for AR applications with uncertain demands in an MEC network, such that the reward of provisioning services for AR applications is maximized and the responsiveness of AR applications is enhanced, subject to both network resource capacity. We devise an exact solution for the problem if the problem size is small, otherwise we develop an efficient approximation algorithm with a provable approximation ratio for the problem. We also devise an online learning algorithm with a bounded regret for the dynamic reward maximization problem without the knowledge of the future arrivals of AR requests, by adopting the technique of Multi-Armed Bandits (MAB). We evaluate the performance of the proposed algorithms through simulations. Experimental results show that the proposed algorithms outperform existing studies by 17 % higher reward.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS51616.2021.00105","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Augmented Reality (AR) has various practical applications in healthcare, education, and entertainment. To provide a fully interactive and immersive experience, AR applications require extremely high responsiveness and ultra-low processing latency. Mobile edge computing (MEC) has shown great potential in meeting such stringent requirements of AR applications by implementing AR requests on edge servers in close proximity to these applications. In this paper, we investigate the problem of reward maximization for AR applications with uncertain demands in an MEC network, such that the reward of provisioning services for AR applications is maximized and the responsiveness of AR applications is enhanced, subject to network resource capacity constraints. We devise an exact solution for the problem when the problem size is small; otherwise, we develop an efficient approximation algorithm with a provable approximation ratio. We also devise an online learning algorithm with a bounded regret for the dynamic reward maximization problem without knowledge of future arrivals of AR requests, by adopting the technique of Multi-Armed Bandits (MAB). We evaluate the performance of the proposed algorithms through simulations. Experimental results show that the proposed algorithms outperform existing approaches, achieving 17% higher reward.
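To give a sense of the Multi-Armed Bandit technique the abstract refers to, the following is a minimal illustrative sketch, assuming a standard UCB1 index policy in which each arm corresponds to a candidate edge server and the observed reward models a normalized service reward for offloading an AR request there. The class name UCB1EdgeSelector, the per-server reward model, and the Bernoulli simulation are hypothetical illustrations only; the paper's actual algorithm additionally accounts for resource capacity constraints and is not reproduced here.

```python
import math
import random


class UCB1EdgeSelector:
    """Minimal UCB1 bandit sketch: each arm is a candidate edge server (hypothetical model)."""

    def __init__(self, num_servers):
        self.counts = [0] * num_servers    # times each server has been selected
        self.values = [0.0] * num_servers  # running mean of observed reward per server

    def select(self):
        # Try every server once before applying the UCB index rule.
        for server, count in enumerate(self.counts):
            if count == 0:
                return server
        total = sum(self.counts)
        # UCB1 index: empirical mean plus an exploration bonus.
        ucb = [
            value + math.sqrt(2.0 * math.log(total) / count)
            for value, count in zip(self.values, self.counts)
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, server, reward):
        self.counts[server] += 1
        n = self.counts[server]
        # Incremental update of the empirical mean reward.
        self.values[server] += (reward - self.values[server]) / n


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical unknown mean rewards per edge server (e.g., normalized
    # service reward after accounting for latency); the learner never sees these.
    true_means = [0.3, 0.55, 0.7, 0.4]
    bandit = UCB1EdgeSelector(len(true_means))
    for _ in range(5000):  # each round represents one arriving AR request
        s = bandit.select()
        reward = 1.0 if random.random() < true_means[s] else 0.0
        bandit.update(s, reward)
    print("selection counts per edge server:", bandit.counts)
```

Under these assumptions, the selection counts concentrate on the server with the highest mean reward while the exploration bonus keeps the per-round regret bounded, which is the property the abstract's online learning algorithm is stated to guarantee.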