{"title":"Transparent Avoidance of Redundant Data Transfer on GPU-enabled Apache Spark","authors":"Ryo Asai, M. Okita, Fumihiko Ino, K. Hagihara","doi":"10.1145/3180270.3180276","DOIUrl":null,"url":null,"abstract":"This paper presents an extension to IBMSparkGPU, which is an Apache Spark framework capable of compute- or memory-intensive tasks on a graphics processing unit (GPU). The key contribution of this extension is an automated runtime that implicitly avoids redundant CPU-GPU data transfers without code modification. To realize this transparent capability, the runtime analyzes data dependencies of the target Spark code dynamically; thus, intermediate data on GPU can be cached, reused, and replaced appropriately to achieve acceleration. Experimental results demonstrate that the proposed runtime accelerates a machine learning application by a factor of 1.3. We expect that the proposed transparent runtime will be useful for accelerating IBMSparkGPU applications, which typically include a chain of GPU-offloaded tasks.","PeriodicalId":274320,"journal":{"name":"Proceedings of the 11th Workshop on General Purpose GPUs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 11th Workshop on General Purpose GPUs","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3180270.3180276","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
This paper presents an extension to IBMSparkGPU, an Apache Spark framework that offloads compute- or memory-intensive tasks to a graphics processing unit (GPU). The key contribution of this extension is an automated runtime that implicitly avoids redundant CPU-GPU data transfers without requiring any code modification. To realize this transparent capability, the runtime dynamically analyzes the data dependencies of the target Spark code; intermediate data on the GPU can therefore be cached, reused, and replaced appropriately to achieve acceleration. Experimental results demonstrate that the proposed runtime accelerates a machine learning application by a factor of 1.3. We expect that the proposed transparent runtime will be useful for accelerating IBMSparkGPU applications, which typically include a chain of GPU-offloaded tasks.
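To illustrate the kind of redundancy the runtime targets, the following is a minimal conceptual sketch in Scala using only standard Spark APIs. It does not use IBMSparkGPU's actual API; the helper gpuMap is a hypothetical placeholder for a GPU-offloaded transformation. Without the proposed runtime, each such stage would copy its input from host to device and its result back to the host; because the output of the first stage feeds directly into the second, a dependency-aware runtime could keep the intermediate data resident on the GPU and skip the round trip.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

// Conceptual sketch only: gpuMap is a hypothetical stand-in for a
// GPU-offloaded transformation, not IBMSparkGPU's real interface.
object RedundantTransferExample {
  // Placeholder: here gpuMap simply runs the closure on the CPU; a
  // GPU-enabled Spark would dispatch it as a GPU kernel instead.
  def gpuMap(rdd: RDD[Float])(f: Float => Float): RDD[Float] = rdd.map(f)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("gpu-chain").getOrCreate()
    val sc = spark.sparkContext

    val input = sc.parallelize(1 to 1000000).map(_.toFloat)

    // A chain of GPU-offloaded stages, as in iterative machine learning
    // workloads. Naively, stage1's output would travel GPU -> CPU -> GPU
    // before stage2 runs; caching it on the GPU avoids that transfer.
    val stage1 = gpuMap(input)(x => x * 2.0f)
    val stage2 = gpuMap(stage1)(x => x + 1.0f)

    println(stage2.take(5).mkString(", "))
    spark.stop()
  }
}
```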