Accelerating tile low-rank GEMM on Sunway architecture: POSTER
Qingchang Han, Hailong Yang, Zhongzhi Luan, D. Qian
Proceedings of the 16th ACM International Conference on Computing Frontiers, 2019. DOI: 10.1145/3310273.3323425
Abstract
Tile Low-Rank (TLR) GEMM can significantly reduce both the computation and the memory footprint of matrix multiplication while preserving the same level of accuracy [1]. TLR-GEMM builds on the TLR data format, an efficient way to store large-scale sparse matrices: the matrix is partitioned into blocks, known as tiles, and each off-diagonal tile is compressed into the product of two tall-and-skinny matrices (its low-rank representation). TLR-GEMM multiplies TLR matrices A and B to obtain matrix C. It can be implemented in batch mode: multiple threads are launched, and each thread applies the required operations, including dense GEMM, SVD, and QR decomposition, to its corresponding tiles. A key research challenge for TLR-GEMM is that modern high-performance processors have diverse architectures, so the implementation must be adapted to each architecture's unique features to achieve good performance.
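To make the tile-level idea concrete, the following is a minimal NumPy sketch (not the paper's Sunway implementation) of how an off-diagonal tile stored as two tall-and-skinny factors U and V (tile ~= U V^T) can be multiplied with another low-rank tile and recompressed via a truncated SVD. All names (compress_tile, lr_tile_matmul, the tolerance eps, the tile size b, and the rank k) are illustrative assumptions, not identifiers from the paper.

import numpy as np

def compress_tile(tile, eps=1e-8):
    # Truncated SVD: return factors (U, V) with tile ~= U @ V.T to relative tolerance eps.
    u, s, vt = np.linalg.svd(tile, full_matrices=False)
    rank = max(1, int(np.sum(s > eps * s[0])))
    U = u[:, :rank] * s[:rank]        # absorb the singular values into U
    V = vt[:rank, :].T
    return U, V

def lr_tile_matmul(a_tile, b_tile, eps=1e-8):
    # Multiply two low-rank tiles (Ua Va^T)(Ub Vb^T) = Ua (Va^T Ub) Vb^T, then recompress.
    Ua, Va = a_tile
    Ub, Vb = b_tile
    core = Va.T @ Ub                  # only a small k x k core is computed densely
    # For brevity the product is formed densely before recompression; a real TLR kernel
    # would recompress from the factors directly (e.g. QR of the factors + a small SVD).
    return compress_tile(Ua @ core @ Vb.T, eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b, k = 256, 8                     # tile size and (small) numerical rank, chosen arbitrarily
    A_tile = (rng.standard_normal((b, k)), rng.standard_normal((b, k)))
    B_tile = (rng.standard_normal((b, k)), rng.standard_normal((b, k)))
    Uc, Vc = lr_tile_matmul(A_tile, B_tile)
    dense = (A_tile[0] @ A_tile[1].T) @ (B_tile[0] @ B_tile[1].T)
    print(np.allclose(Uc @ Vc.T, dense, rtol=1e-5, atol=1e-5))  # low-rank result matches the dense product

In a batched TLR-GEMM of the kind the abstract describes, each thread would apply routines like these (plus dense GEMM for diagonal tiles) to its own set of tiles.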