User Allocation in Mobile Edge Computing: A Deep Reinforcement Learning Approach

Subrat Prasad Panda, A. Banerjee, A. Bhattacharya
{"title":"移动边缘计算中的用户分配:一种深度强化学习方法","authors":"Subrat Prasad Panda, A. Banerjee, A. Bhattacharya","doi":"10.1109/ICWS53863.2021.00064","DOIUrl":null,"url":null,"abstract":"In recent times, the need for low latency has made it necessary to deploy application services physically and logically close to the users rather than using the cloud for hosting services. This paradigm of computing, known as edge or fog computing, is becoming increasingly popular. An edge user allocation policy determines how to allocate service requests from mobile users to MEC servers. Current state-of-the-art techniques assume that the total resource utilization on an edge server is equal to the sum of the individual resource utilizations of services provisioned from the edge server. However, the relationship between resources utilized on an edge server with the number of service requests served from there is usually highly non-linear, hence, mathematically modelling the resource utilization is challenging. This is especially true in case of an environment with CPU-GPU co-execution, as commonly observed in modern edge computing. In this work, we provide an on-device Deep Reinforcement Learning (DRL) framework to predict the resource utilization of incoming service requests from users, thereby estimating the number of users an edge server can accommodate for a given latency threshold. We further propose an algorithm to obtain the user allocation policy. We compare the performance of the proposed DRL framework with traditional allocation approaches and show that the DRL framework outperforms deterministic approaches by at least 10% in terms of the number of users allocated.","PeriodicalId":213320,"journal":{"name":"2021 IEEE International Conference on Web Services (ICWS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"User Allocation in Mobile Edge Computing: A Deep Reinforcement Learning Approach\",\"authors\":\"Subrat Prasad Panda, A. Banerjee, A. Bhattacharya\",\"doi\":\"10.1109/ICWS53863.2021.00064\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent times, the need for low latency has made it necessary to deploy application services physically and logically close to the users rather than using the cloud for hosting services. This paradigm of computing, known as edge or fog computing, is becoming increasingly popular. An edge user allocation policy determines how to allocate service requests from mobile users to MEC servers. Current state-of-the-art techniques assume that the total resource utilization on an edge server is equal to the sum of the individual resource utilizations of services provisioned from the edge server. However, the relationship between resources utilized on an edge server with the number of service requests served from there is usually highly non-linear, hence, mathematically modelling the resource utilization is challenging. This is especially true in case of an environment with CPU-GPU co-execution, as commonly observed in modern edge computing. In this work, we provide an on-device Deep Reinforcement Learning (DRL) framework to predict the resource utilization of incoming service requests from users, thereby estimating the number of users an edge server can accommodate for a given latency threshold. We further propose an algorithm to obtain the user allocation policy. 
We compare the performance of the proposed DRL framework with traditional allocation approaches and show that the DRL framework outperforms deterministic approaches by at least 10% in terms of the number of users allocated.\",\"PeriodicalId\":213320,\"journal\":{\"name\":\"2021 IEEE International Conference on Web Services (ICWS)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Web Services (ICWS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICWS53863.2021.00064\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Web Services (ICWS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICWS53863.2021.00064","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9

Abstract

In recent times, the need for low latency has made it necessary to deploy application services physically and logically close to users rather than hosting them in the cloud. This paradigm of computing, known as edge or fog computing, is becoming increasingly popular. An edge user allocation policy determines how to allocate service requests from mobile users to MEC servers. Current state-of-the-art techniques assume that the total resource utilization on an edge server equals the sum of the individual resource utilizations of the services it provisions. However, the relationship between the resources utilized on an edge server and the number of service requests it serves is usually highly non-linear, which makes mathematically modelling resource utilization challenging. This is especially true in environments with CPU-GPU co-execution, as commonly observed in modern edge computing. In this work, we provide an on-device Deep Reinforcement Learning (DRL) framework that predicts the resource utilization of incoming service requests from users, thereby estimating the number of users an edge server can accommodate for a given latency threshold. We further propose an algorithm to obtain the user allocation policy. We compare the performance of the proposed DRL framework with traditional allocation approaches and show that it outperforms deterministic approaches by at least 10% in terms of the number of users allocated.
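The listing above gives no implementation details, but the core idea (learning from observed latencies how many users an edge server can admit under a latency threshold, instead of relying on an additive resource model) can be illustrated with a small sketch. The quadratic latency curve, the reward values, and the tabular Q-learning agent below are illustrative assumptions chosen for exposition only; they are much simpler than the on-device deep RL framework the authors describe.

```python
# Minimal sketch: estimate, via reinforcement learning, how many users an edge
# server can admit before a latency threshold is violated, without an explicit
# analytical model of resource utilization. All constants, the latency curve,
# and the tabular Q-learning agent are illustrative assumptions, not the
# paper's actual on-device DRL framework.

import random

LATENCY_THRESHOLD = 100.0   # ms, assumed service-level limit
MAX_USERS = 50              # upper bound on the toy state space

def observed_latency(num_users: int) -> float:
    """Toy non-linear latency curve standing in for real on-device
    measurements under CPU-GPU co-execution."""
    return 10.0 + 0.5 * num_users + 0.08 * num_users ** 2 + random.gauss(0.0, 2.0)

# Tabular Q-learning: state = number of admitted users, actions = {0: reject, 1: admit}.
Q = [[0.0, 0.0] for _ in range(MAX_USERS + 1)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for _ in range(2000):
    state = 0
    while state < MAX_USERS:
        # epsilon-greedy action selection (ties favour admitting, which speeds exploration)
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 1 if Q[state][1] >= Q[state][0] else 0
        if action == 0:                       # stop admitting: episode ends
            reward, next_state, done = 0.0, state, True
        else:                                 # admit one more user and observe latency
            next_state = state + 1
            ok = observed_latency(next_state) <= LATENCY_THRESHOLD
            reward, done = (1.0, False) if ok else (-10.0, True)
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        if done:
            break
        state = next_state

# Derived capacity estimate: keep admitting while "admit" is valued above "reject".
capacity = 0
while capacity < MAX_USERS and Q[capacity][1] > Q[capacity][0]:
    capacity += 1
print(f"Estimated user capacity under a {LATENCY_THRESHOLD:.0f} ms threshold: {capacity}")
```

In a real deployment the hypothetical observed_latency function would be replaced by measurements taken on the edge server and the Q-table by a neural approximator; the resulting per-server capacity estimates would then feed an allocation algorithm that assigns users to MEC servers.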