A Full Mirror Computation Model for Edge-Cloud Computing
Yuanda Wang, Ye Xia, Youlin Zhang, D. Melissourgos, Olufemi O. Odegbile, Shigang Chen
2021 Thirteenth International Conference on Contemporary Computing (IC3-2021)
Published: 2021-08-05 · DOI: 10.1145/3474124.3474142
Citations: 1
Abstract
Edge computing has been gaining momentum lately as a means to complement cloud computing for shorter response time, better user experience, and improved data security. Traditional approaches to edge-cloud computing take two major forms. One is to offload computation from an edge device to the cloud so as to take advantage of the virtually unlimited resources in the cloud and reduce the computation time. The other is to move selected computation to the edge devices, where data are produced, actions are performed, and users are located. In practice, however, it is often difficult to split the computation tasks of an application and decide which tasks should be performed in the cloud and which at the edge. The reason is that, for the same computation, it may sometimes be beneficial to execute it in the cloud and at other times at the edge, depending on run-time conditions such as the data size, the type of computation, and the communication delay, all of which vary over time. This paper proposes a new edge-cloud computing model, called the full mirror model, which provides a generic method to circumvent the problem of dynamically deciding the execution location. With a two-thread implementation mechanism, the new model achieves an execution completion time approximately equal to the smaller of the cloud execution time and the edge execution time, regardless of the run-time conditions. We test the new model by modifying an existing program for network traffic analysis so that it runs at both the edge and the cloud in a coordinated fashion. The experimental results demonstrate that the proposed model outperforms edge-alone computing and cloud-alone computing in reducing the execution time.
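The core idea of the two-thread mechanism, as the abstract describes it, is to mirror the same computation at the edge and in the cloud concurrently and accept whichever result arrives first. A minimal sketch of that race is shown below; the functions `run_edge` and `run_cloud` and their sleep-based delays are hypothetical stand-ins (not from the paper) for edge-local execution and cloud offloading with network delay.

```python
import concurrent.futures
import time

def run_edge(data):
    # Hypothetical edge execution: local compute, no network transfer.
    time.sleep(0.05)
    return ("edge", sum(data))

def run_cloud(data):
    # Hypothetical cloud execution: network round trip plus fast compute.
    time.sleep(0.02)
    return ("cloud", sum(data))

def full_mirror(data):
    """Launch the same computation at both the edge and the cloud,
    returning whichever result completes first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(run_edge, data), pool.submit(run_cloud, data)]
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in pending:
            f.cancel()  # best effort; an already-running thread finishes on its own
        return next(iter(done)).result()

where, result = full_mirror(range(100))
print(where, result)  # the faster mirror wins under these simulated delays
```

The completion time of `full_mirror` is roughly the minimum of the two execution times, matching the model's stated guarantee; in a real deployment the two "mirrors" would also need the input data available at both locations, which is where the paper's coordination mechanism comes in.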