Towards a Scalable and QoS-Aware Load Balancing Platform for Edge Computing Environments
Charafeddine Mechalikh, Hajer Taktak, Faouzi Moussa
2019 International Conference on High Performance Computing & Simulation (HPCS), July 2019
DOI: 10.1109/HPCS48598.2019.9188159
Citations: 4
Abstract
Edge computing is a new computing paradigm that brings cloud applications close to the Internet of Things (IoT) devices at the edge of the network. It improves resource utilization efficiency by using the resources already available at the network edge [8]. As a result, it decreases the cloud workload, reduces latency, and enables a new breed of latency-sensitive applications such as connected vehicles. Horizontal scalability is another advantage of edge computing: unlike cloud and fog computing, it takes advantage of the growing number of connected devices, as this growth increases the number of available resources. Most research in this field has focused only on finding the optimal task offloading destination by minimizing latency, resource utilization, and energy consumption, thereby ignoring the effect of synchronization between devices and the deployment delay of applications (i.e., containers). Motivated by the advantages of edge computing, in this paper we introduce a load balancing platform for IoT-edge computing environments. In contrast to the current trend, we focus first on application deployment and the synchronization between devices in order to provide better scalability, enable a self-manageable IoT network, and meet quality of service (QoS) requirements. According to the simulation results, the proposed approach provides better scalability: it reduces network utilization and the cloud workload. In addition, it provides shorter application deployment delays and lower latency.
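To make the offloading problem the abstract criticizes more concrete, the sketch below shows one simple way a destination could be chosen by minimizing a weighted combination of latency, resource utilization, and energy consumption. This is an illustrative toy, not the paper's platform: the node attributes, weights, and 100 ms latency normalization budget are all assumptions made here for the example.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    """Hypothetical description of a candidate offloading destination."""
    name: str
    latency_ms: float       # estimated round-trip latency to this node
    cpu_utilization: float  # current load, in [0.0, 1.0]
    energy_cost: float      # normalized energy cost per task, in [0.0, 1.0]

def offload_cost(node: EdgeNode,
                 w_latency: float = 0.5,
                 w_util: float = 0.3,
                 w_energy: float = 0.2) -> float:
    # Weighted sum over the three criteria named in the abstract.
    # Latency is normalized against an assumed 100 ms budget and capped at 1.
    return (w_latency * min(node.latency_ms / 100.0, 1.0)
            + w_util * node.cpu_utilization
            + w_energy * node.energy_cost)

def pick_destination(nodes: list[EdgeNode]) -> EdgeNode:
    """Return the candidate with the lowest weighted offloading cost."""
    return min(nodes, key=offload_cost)

nodes = [
    EdgeNode("edge-1", latency_ms=5.0,  cpu_utilization=0.9, energy_cost=0.4),
    EdgeNode("edge-2", latency_ms=12.0, cpu_utilization=0.3, energy_cost=0.3),
    EdgeNode("cloud",  latency_ms=80.0, cpu_utilization=0.1, energy_cost=0.6),
]
print(pick_destination(nodes).name)  # → edge-2
```

Note that such a static cost function captures none of the inter-device synchronization or container deployment delay that the paper argues must also be modeled; that gap is precisely the motivation for the proposed platform.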