Evaluation of EMVA Using the Instruction-Level Parallelism on Tegra X1
Authors: H. Tominaga, Asuka Nakamura, Y. Maekawa
Venue: 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW)
DOI: 10.1109/CANDARW.2018.00052
Published: November 2018
Citations: 0
Abstract
Generally, solving random-sparse systems of equations requires a direct method such as LU decomposition. This paper proposes a speed-up method based on the extended vectorized LU factorization (EMVA) method for solving random-sparse equations using the instruction-level parallelism of a CUDA GPU. EMVA on CUDA is known to achieve high execution efficiency [1]. However, the kernel-invocation overhead of EMVA is not negligible, because the method must launch a new kernel each time the instruction level advances. This overhead shrinks on an architecture that can switch smoothly between CPU and GPU kernels, such as the Tegra X1. The proposed method therefore selects the execution architecture, CPU or GPU, for each instruction level on the basis of that level's degree of parallelism. Our evaluation results demonstrate that the proposed method achieves approximately a 26.5x speedup over the existing EMVA method.
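The per-level device selection the abstract describes can be sketched as a simple scheduling heuristic. This is a minimal illustration, not the paper's implementation: the names (`Level`, `choose_device`, `GPU_THRESHOLD`) and the fixed threshold are assumptions introduced for the example; the paper selects based on each level's parallelism, which here is modeled as the number of rows that can be eliminated concurrently at that level.

```python
# Sketch of selecting CPU or GPU per instruction level of a level-scheduled
# sparse LU factorization. Narrow levels stay on the CPU to avoid
# kernel-launch overhead; wide levels go to the GPU.
from dataclasses import dataclass
from enum import Enum
from typing import List


class Device(Enum):
    CPU = "cpu"
    GPU = "gpu"


@dataclass
class Level:
    # Row indices that are mutually independent and can be
    # eliminated in parallel at this instruction level.
    rows: List[int]


# Assumed cutoff: below this parallel width, a GPU kernel launch is
# presumed to cost more than it saves (illustrative value only).
GPU_THRESHOLD = 32


def choose_device(level: Level) -> Device:
    """Pick the architecture for one instruction level by its parallelism."""
    return Device.GPU if len(level.rows) >= GPU_THRESHOLD else Device.CPU


def schedule(levels: List[Level]) -> List[Device]:
    """Assign a device to every instruction level before factorization."""
    return [choose_device(lv) for lv in levels]
```

For example, a narrow early level with 4 independent rows would run on the CPU, while a wide level with 64 independent rows would be dispatched to the GPU. On a unified-memory part like the Tegra X1, such switches avoid the host-device copies that would dominate on a discrete GPU.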