Preconditioner Auto-Tuning Using Deep Learning for Sparse Iterative Algorithms

Kenya Yamada, T. Katagiri, H. Takizawa, K. Minami, M. Yokokawa, Toru Nagai, M. Ogino

2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW), November 2018
DOI: 10.1109/CANDARW.2018.00055
Citations: 7
Abstract
In numerical libraries for sparse matrix operations, there are many tuning parameters related to implementation selection, and selecting different tuning parameters can result in very different performance. Moreover, the optimal implementation depends on the sparse matrix being operated on. It is difficult to find the optimal implementation without executing each implementation and examining its performance on the given sparse matrix. In this study, we propose an implementation selection method, based on deep learning, for sparse iterative algorithms and preconditioners in a numerical library. The proposed method uses full-color images to represent the features of a sparse matrix. We present an image generation method that partitions a given matrix (to generate its feature image) so that the value of each matrix element is considered in the implementation selection. We then evaluate the effectiveness of the proposed method in a numerical experiment that measures the accuracy of implementation selection. The training data comprise pairs of a sparse matrix and its optimal implementation, where the optimal implementation of each matrix is determined in advance by executing every implementation and recording the best one. The experimental results show that the proposed method selects the optimal implementation of each sparse matrix with 79.5% accuracy.
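To make the matrix-to-image idea concrete, the sketch below partitions a matrix into a fixed grid of blocks and encodes per-block statistics as color channels. The specific channel definitions (nonzero density, normalized mean and max magnitude) are illustrative assumptions for this sketch, not the paper's exact encoding:

```python
import numpy as np

def matrix_to_feature_image(A, size=16):
    """Partition a square matrix into a size x size grid of blocks and
    build a 3-channel feature image. Channel choices are assumptions:
      R: fraction of nonzero entries in the block
      G: mean absolute value of the block (normalized by the global max)
      B: max absolute value of the block (normalized by the global max)
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    img = np.zeros((size, size, 3))
    gmax = np.abs(A).max()
    gmax = gmax if gmax > 0 else 1.0
    # Split rows/cols into `size` nearly equal partitions.
    edges = np.linspace(0, n, size + 1, dtype=int)
    for i in range(size):
        for j in range(size):
            blk = A[edges[i]:edges[i + 1], edges[j]:edges[j + 1]]
            if blk.size == 0:
                continue
            absblk = np.abs(blk)
            img[i, j, 0] = np.count_nonzero(blk) / blk.size
            img[i, j, 1] = absblk.mean() / gmax
            img[i, j, 2] = absblk.max() / gmax
    return img

# Example: a tridiagonal matrix produces an image whose nonzero
# density is concentrated along the diagonal band of blocks.
n = 128
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
img = matrix_to_feature_image(A, size=16)
```

An image of this kind can then serve as input to a standard convolutional classifier whose output classes are the candidate implementations.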