{"title":"Task Scheduling Strategies for Batched Basic Linear Algebra Subprograms on Many-core CPUs","authors":"Daichi Mukunoki, Yusuke Hirota, Toshiyuki Imamura","doi":"10.1109/MCSoC51149.2021.00042","DOIUrl":null,"url":null,"abstract":"Batched Basic Linear Algebra Subprograms (BLAS) provides an interface that allows multiple problems for a given BLAS routine (operation) - with different parameters and sizes independent of each other - to be computed in a single routine. The efficient use of cores on many-core processors has been introduced for computing multiple minor problems for which sufficient parallelism cannot be extracted from a single problem. The major goal of this study is to automatically generate high-performance batched routines for all BLAS routines using nonbatched BLAS implementation and OpenMP on CPUs. Furthermore, the primary challenge is the task scheduling method for allocating batches to cores. In this study, we propose a scheduling method based on a greedy algorithm, which allocates batches based on their costs in advance to eliminate load imbalance when the costs of batches vary. Then, we investigate the performance of five scheduling methods, including ones implemented in OpenMP and our proposed method, on matrix multiplication (GEMM) and matrix-vector multiplication (GEMV) under several conditions and environments. As a result, we found that the optimal scheduling strategy differs depending on the problem setting and environment. Based on this result, we propose an automatic generation scheme of batched BLAS from nonbatched BLAS that can introduce arbitrary task scheduling. This scheme facilitates the development of batched routines for a full set of BLAS routines and special BLAS implementations such as high-precision versions.","PeriodicalId":166811,"journal":{"name":"2021 IEEE 14th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 14th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MCSoC51149.2021.00042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Batched Basic Linear Algebra Subprograms (BLAS) provides an interface that allows multiple independent problems for a given BLAS routine (operation), each with its own parameters and sizes, to be computed in a single routine call. It enables efficient use of the cores on many-core processors when computing many small problems, for which sufficient parallelism cannot be extracted from a single problem. The major goal of this study is to automatically generate high-performance batched routines for all BLAS routines using a nonbatched BLAS implementation and OpenMP on CPUs. The primary challenge is the task scheduling method that allocates batches to cores. In this study, we propose a scheduling method based on a greedy algorithm, which assigns batches to cores in advance according to their costs, eliminating load imbalance when the batch costs vary. We then investigate the performance of five scheduling methods, including those implemented in OpenMP and our proposed method, on matrix multiplication (GEMM) and matrix-vector multiplication (GEMV) under several conditions and environments. We found that the optimal scheduling strategy differs depending on the problem setting and environment. Based on this result, we propose a scheme for automatically generating batched BLAS from nonbatched BLAS that can incorporate arbitrary task scheduling. This scheme facilitates the development of batched routines for the full set of BLAS routines and for special BLAS implementations such as high-precision versions.
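
To make the scheduling idea concrete, the sketch below shows one way a batched DGEMM could be built from a nonbatched BLAS call and OpenMP, with batches pre-assigned to threads by a greedy, cost-based rule (longest-processing-time first). This is an illustrative assumption, not the authors' implementation: the flop-count cost model (2*m*n*k), the gemm_task struct, and the helpers dgemm_batched_greedy and greedy_assign are all hypothetical names introduced here for clarity.

/* Minimal sketch, not the paper's code: a batched DGEMM built by wrapping a
 * sequential, nonbatched CBLAS call in an OpenMP parallel region, with batches
 * pre-assigned to threads by a greedy cost-based rule. Assumes the underlying
 * BLAS runs single-threaded inside the parallel region. */
#include <stdlib.h>
#include <omp.h>
#include <cblas.h>

typedef struct {
    int m, n, k;
    const double *A, *B;
    double *C;
    double alpha, beta;
} gemm_task;

/* Hypothetical cost model: flop count of one GEMM. */
static double task_cost(const gemm_task *t) {
    return 2.0 * t->m * t->n * t->k;
}

/* Greedy assignment: visit tasks in descending cost order and give each one to
 * the thread with the smallest accumulated cost so far (LPT rule). */
static void greedy_assign(const gemm_task *tasks, int ntasks, int nthreads,
                          int *owner /* out: owner[i] = thread for task i */) {
    double *load = calloc(nthreads, sizeof(double));
    int *order = malloc(ntasks * sizeof(int));
    for (int i = 0; i < ntasks; i++) order[i] = i;

    /* insertion sort by descending cost; fine for modest batch counts */
    for (int i = 1; i < ntasks; i++) {
        int v = order[i];
        double cv = task_cost(&tasks[v]);
        int j = i - 1;
        while (j >= 0 && task_cost(&tasks[order[j]]) < cv) {
            order[j + 1] = order[j];
            j--;
        }
        order[j + 1] = v;
    }

    for (int i = 0; i < ntasks; i++) {
        int t = order[i];
        int best = 0;
        for (int p = 1; p < nthreads; p++)
            if (load[p] < load[best]) best = p;
        owner[t] = best;
        load[best] += task_cost(&tasks[t]);
    }
    free(load);
    free(order);
}

void dgemm_batched_greedy(const gemm_task *tasks, int ntasks) {
    int nthreads = omp_get_max_threads();
    int *owner = malloc(ntasks * sizeof(int));
    greedy_assign(tasks, ntasks, nthreads, owner);

    #pragma omp parallel
    {
        int me = omp_get_thread_num();
        for (int i = 0; i < ntasks; i++) {
            if (owner[i] != me) continue;
            const gemm_task *t = &tasks[i];
            /* each batch entry is computed by one nonbatched BLAS call */
            cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                        t->m, t->n, t->k, t->alpha,
                        t->A, t->m, t->B, t->k, t->beta, t->C, t->m);
        }
    }
    free(owner);
}

For comparison, the OpenMP-based scheduling methods mentioned in the abstract would presumably drop the pre-assignment and let the runtime distribute the same batch loop, for example with #pragma omp parallel for schedule(static), schedule(dynamic), or schedule(guided); the greedy variant instead fixes the mapping up front using the per-batch cost estimates.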