{"title":"Bamshad: A JIT compiler for running Java stream APIs on heterogeneous environments","authors":"Bahram Yarahmadi, F. Khunjush","doi":"10.1109/CADS.2017.8310734","DOIUrl":null,"url":null,"abstract":"Nowadays, Graphics Processing Units (GPUs) and other types of emerging accelerators have an important role in high-performance computing. These devices can be leveraged in a wide range of applications through using appropriate programming environments such as CUDA and OpenCL which lead to reaching high-performance applications. However, on one hand, programming GPUs is a painful and error-prone task and requires a great amount of expertise especially in low-level architectural features as well as their memory management in order to achieve reasonable performance. On the other hand, enabling running high-level programming languages such as Java with massive computational power of today's GPUs can lessen the burden of this complexity. Considering new features in Java 8 such as lambda functions which are used in Java parallel streams, supporting these new features is vital to use these devices in real applications. In this paper, we introduce a just-in-time compiler, named Bamshad, which ports lambda functions used in Java parallel streams to GPU at runtime. For this, a series of compiler techniques are adopted to transparently eliminate unnecessary data communication between CPUs and GPUs. With our approach, a programmer is not involved in the detailed process of tuning a GPU device for reducing the amount of communication. The experimental results show that the proposed JIT compiler yields 13× improvement in comparison to sequential Java execution for all benchmarks. Also, in comparison to parallel Java, our work yields 3.9× improvement.","PeriodicalId":321346,"journal":{"name":"2017 19th International Symposium on Computer Architecture and Digital Systems (CADS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 19th International Symposium on Computer Architecture and Digital Systems (CADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CADS.2017.8310734","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Graphics Processing Units (GPUs) and other emerging accelerators play an important role in high-performance computing. These devices can be leveraged across a wide range of applications through programming environments such as CUDA and OpenCL, enabling high-performance implementations. However, programming GPUs is a painful and error-prone task that requires considerable expertise, especially in low-level architectural features and memory management, to achieve reasonable performance. Coupling a high-level programming language such as Java with the massive computational power of today's GPUs can lessen this complexity. Given the new features of Java 8, such as the lambda functions used in Java parallel streams, supporting them is vital for exploiting these devices in real applications. In this paper, we introduce a just-in-time (JIT) compiler, named Bamshad, which ports the lambda functions used in Java parallel streams to the GPU at runtime. To this end, a series of compiler techniques is adopted to transparently eliminate unnecessary data communication between CPUs and GPUs, so the programmer is not involved in the detailed process of tuning a GPU device to reduce communication. The experimental results show that the proposed JIT compiler yields a 13× improvement over sequential Java execution across all benchmarks, and a 3.9× improvement over parallel Java.
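To make the target workload concrete, below is a minimal sketch of the kind of Java 8 parallel-stream lambda the abstract refers to. The SAXPY-style computation, array sizes, and class name are illustrative assumptions, not an example taken from the paper; the point is only that each index is processed independently, which is the pattern a runtime JIT such as Bamshad could translate into a GPU kernel instead of executing it on the host thread pool.

```java
import java.util.stream.IntStream;

public class ParallelStreamSketch {
    public static void main(String[] args) {
        final int n = 1 << 20;
        final float a = 2.0f;
        float[] x = new float[n];
        float[] y = new float[n];
        float[] result = new float[n];

        // Initialize input arrays on the host (CPU) side.
        for (int i = 0; i < n; i++) {
            x[i] = i;
            y[i] = n - i;
        }

        // A data-parallel lambda over a parallel stream: every iteration is
        // independent, so a JIT compiler could, in principle, offload this
        // body to a GPU at runtime without source-level changes.
        IntStream.range(0, n)
                 .parallel()
                 .forEach(i -> result[i] = a * x[i] + y[i]);

        System.out.println("result[0] = " + result[0]);
    }
}
```

With ordinary OpenJDK this code simply runs on the common fork/join pool; the abstract's claim is that a GPU-aware JIT can take the same source and decide, transparently, where the lambda executes and which host-device transfers are actually necessary.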