Mieszko Ferens, Diego Hortelano, I. de Miguel, Ramón J. Durán Barroso, J. Aguado, L. Ruiz, N. Merayo, P. Fernández, R. Lorenzo, E. Abril
{"title":"Deep Reinforcement Learning Applied to Computation Offloading of Vehicular Applications: A Comparison","authors":"Mieszko Ferens, Diego Hortelano, I. de Miguel, Ramón J. Durán Barroso, J. Aguado, L. Ruiz, N. Merayo, P. Fernández, R. Lorenzo, E. Abril","doi":"10.1109/BalkanCom55633.2022.9900545","DOIUrl":null,"url":null,"abstract":"An observable trend in recent years is the increasing demand for more complex services designed to be used with portable or automotive embedded devices. The problem is that these devices may lack the computational resources necessary to comply with service requirements. To solve it, cloud and edge computing, and in particular, the recent multi-access edge computing (MEC) paradigm, have been proposed. By offloading the processing of computational tasks from devices or vehicles to an external network, a larger amount of computational resources, placed in different locations, becomes accessible. However, this in turn creates the issue of deciding where each task should be executed. In this paper, we model the problem of computation offloading of vehicular applications to solve it using deep reinforcement learning (DRL) and evaluate the performance of different DRL algorithms and heuristics, showing the advantages of the former methods. Moreover, the impact of two scheduling techniques in computing nodes and two reward strategies in the DRL methods are also analyzed and discussed.","PeriodicalId":114443,"journal":{"name":"2022 International Balkan Conference on Communications and Networking (BalkanCom)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Balkan Conference on Communications and Networking (BalkanCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BalkanCom55633.2022.9900545","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
An observable trend in recent years is the increasing demand for more complex services designed to be used with portable or automotive embedded devices. The problem is that these devices may lack the computational resources necessary to meet service requirements. To address this problem, cloud and edge computing, and in particular the recent multi-access edge computing (MEC) paradigm, have been proposed. By offloading the processing of computational tasks from devices or vehicles to an external network, a larger pool of computational resources, placed in different locations, becomes accessible. However, this in turn creates the issue of deciding where each task should be executed. In this paper, we model the problem of computation offloading of vehicular applications, solve it using deep reinforcement learning (DRL), and evaluate the performance of different DRL algorithms and heuristics, showing the advantages of the DRL-based methods. Moreover, the impact of two scheduling techniques in the computing nodes and of two reward strategies in the DRL methods is also analyzed and discussed.
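To make the offloading decision concrete, the following is a minimal, hypothetical sketch (not the paper's actual model) of how such a problem can be cast as a sequential decision process: each arriving task must be assigned to one of several computing nodes, and the reward reflects whether the task meets its deadline. All node parameters, the task generator, and the reward shape are illustrative assumptions.

```python
import random

# Hypothetical node set (capacity in MIPS, access-link delay in ms).
# These values are made up for illustration only.
NODES = [
    {"name": "vehicle", "mips": 500,   "link_ms": 0},
    {"name": "mec",     "mips": 5000,  "link_ms": 10},
    {"name": "cloud",   "mips": 50000, "link_ms": 80},
]

def new_task():
    # A task is described by its size (million instructions) and deadline (ms).
    return {"mi": random.randint(50, 500), "deadline_ms": random.uniform(20, 200)}

def step(task, action):
    # Assumed reward strategy: +1 if the task finishes before its deadline,
    # otherwise a penalty proportional to the deadline violation.
    node = NODES[action]
    latency_ms = node["link_ms"] + 1000 * task["mi"] / node["mips"]
    if latency_ms <= task["deadline_ms"]:
        return 1.0, latency_ms
    return -(latency_ms - task["deadline_ms"]) / task["deadline_ms"], latency_ms

# Simple heuristic baseline: always offload to the MEC node (index 1).
# A DRL agent would instead learn to choose the action from the task state.
random.seed(0)
total = 0.0
for _ in range(1000):
    task = new_task()
    reward, _ = step(task, action=1)
    total += reward
print(f"Average reward of the MEC-only heuristic: {total / 1000:.3f}")
```

In this toy formulation, a DRL agent observes the task (and, in a fuller model, the load of each node) and selects the node index as its action; heuristics such as "always offload to the MEC" serve as baselines against which the learned policy can be compared.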