{"title":"gpu在异构边缘环境中的加速任务执行","authors":"Dominik Schäfer, Janick Edinger, C. Becker","doi":"10.1109/ICCCN.2018.8487451","DOIUrl":null,"url":null,"abstract":"In edge computing systems, computation is rather offloaded to nearby resources than to the cloud, due to latency reasons. However, the performance demand in the edge grows steadily, which makes nearby resources insufficient for many applications. Additionally, the amount of parallel tasks in the edge increases, based on trends like machine learning, Internet of Things, and artificial intelligence. This introduces a trade- off between the performance of the cloud and the communication latency of the edge. However, many edge devices have powerful co-processors in form of their graphics-processing unit (GPU), which are mostly unused. These processing units have specialized parallel architectures, which are different from standard CPUs and complex to use. In this paper, we present GPU-accelerated task execution for edge computing environments. The paper has four contributions. First, we design and implement a GPU system extension for our Tasklet system - a distributed computing system, which supports edge- and cloud-based task offloading. Second, we introduce a computational abstraction for GPUs in form of a virtual machine, which exploits parallelism while considering device heterogeneity and maintaining unobtrusiveness. Third, we offer an easy-to-use programming interface for the rather complex architecture of GPUs. Fourth, we evaluate our prototype in a real- world testbed and compare the GPU performance to standard edge resources.","PeriodicalId":399145,"journal":{"name":"2018 27th International Conference on Computer Communication and Networks (ICCCN)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"GPU-Accelerated Task Execution in Heterogeneous Edge Environments\",\"authors\":\"Dominik Schäfer, Janick Edinger, C. Becker\",\"doi\":\"10.1109/ICCCN.2018.8487451\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In edge computing systems, computation is rather offloaded to nearby resources than to the cloud, due to latency reasons. However, the performance demand in the edge grows steadily, which makes nearby resources insufficient for many applications. Additionally, the amount of parallel tasks in the edge increases, based on trends like machine learning, Internet of Things, and artificial intelligence. This introduces a trade- off between the performance of the cloud and the communication latency of the edge. However, many edge devices have powerful co-processors in form of their graphics-processing unit (GPU), which are mostly unused. These processing units have specialized parallel architectures, which are different from standard CPUs and complex to use. In this paper, we present GPU-accelerated task execution for edge computing environments. The paper has four contributions. First, we design and implement a GPU system extension for our Tasklet system - a distributed computing system, which supports edge- and cloud-based task offloading. Second, we introduce a computational abstraction for GPUs in form of a virtual machine, which exploits parallelism while considering device heterogeneity and maintaining unobtrusiveness. Third, we offer an easy-to-use programming interface for the rather complex architecture of GPUs. 
Fourth, we evaluate our prototype in a real- world testbed and compare the GPU performance to standard edge resources.\",\"PeriodicalId\":399145,\"journal\":{\"name\":\"2018 27th International Conference on Computer Communication and Networks (ICCCN)\",\"volume\":\"79 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 27th International Conference on Computer Communication and Networks (ICCCN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCN.2018.8487451\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 27th International Conference on Computer Communication and Networks (ICCCN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCN.2018.8487451","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
GPU-Accelerated Task Execution in Heterogeneous Edge Environments
In edge computing systems, computation is offloaded to nearby resources rather than to the cloud, for latency reasons. However, the performance demand at the edge grows steadily, so nearby resources become insufficient for many applications. Additionally, the number of parallel tasks at the edge increases, driven by trends such as machine learning, the Internet of Things, and artificial intelligence. This introduces a trade-off between the performance of the cloud and the communication latency of the edge. However, many edge devices have powerful co-processors in the form of their graphics processing units (GPUs), which mostly remain unused. These processing units have specialized parallel architectures, which differ from standard CPUs and are complex to program. In this paper, we present GPU-accelerated task execution for edge computing environments. The paper has four contributions. First, we design and implement a GPU system extension for our Tasklet system, a distributed computing system that supports edge- and cloud-based task offloading. Second, we introduce a computational abstraction for GPUs in the form of a virtual machine, which exploits parallelism while accounting for device heterogeneity and remaining unobtrusive. Third, we offer an easy-to-use programming interface for the rather complex architecture of GPUs. Fourth, we evaluate our prototype in a real-world testbed and compare GPU performance to that of standard edge resources.
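To make the kind of workload concrete, the sketch below shows a minimal data-parallel GPU kernel (vector addition) of the sort that edge co-processors can accelerate. This is an illustrative example written in plain CUDA, not the Tasklet system's virtual-machine abstraction or its programming interface; all names and structure here are assumptions for illustration only.

// Illustrative sketch only: a minimal CUDA vector-addition kernel showing the
// kind of data-parallel task a GPU co-processor can accelerate. This is NOT
// the Tasklet programming interface described in the paper.
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements in parallel.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *hA = new float[n], *hB = new float[n], *hC = new float[n];
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Allocate device memory and copy the inputs to the GPU.
    float *dA, *dB, *dC;
    cudaMalloc((void**)&dA, bytes);
    cudaMalloc((void**)&dB, bytes);
    cudaMalloc((void**)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(dA, dB, dC, n);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check it.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    delete[] hA; delete[] hB; delete[] hC;
    return 0;
}

A CPU would process such element-wise work sequentially or across a handful of cores, whereas the GPU dispatches thousands of lightweight threads at once; this is the parallelism the paper's virtual-machine abstraction aims to expose without requiring developers to write device-specific code like the above.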