Communication Lower Bounds and Optimal Algorithms for Symmetric Matrix Computations

Hussam Al Daas (STFC, Scientific Computing Department, Rutherford Appleton Laboratory, Didcot, UK), Grey Ballard (Wake Forest University, Computer Science Department, Winston-Salem, NC, USA), Laura Grigori (EPFL, Institute of Mathematics, Lausanne, Switzerland, and PSI, Center for Scientific Computing, Theory and Data, Villigen, Switzerland), Suraj Kumar (Institut national de recherche en sciences et technologies du numérique, Lyon, France), Kathryn Rouse (Inmar Intelligence, Winston-Salem, NC, USA), Mathieu Verite (EPFL, Institute of Mathematics, Lausanne, Switzerland)

arXiv - CS - Distributed, Parallel, and Cluster Computing · Published 2024-09-17 · DOI: arxiv-2409.11304
Citations: 0
Abstract
In this article, we focus on the communication costs of three symmetric matrix computations: i) multiplying a matrix by its transpose, known as a symmetric rank-k update (SYRK); ii) adding the product of a matrix with the transpose of another matrix to the transpose of that product, known as a symmetric rank-2k update (SYR2K); and iii) performing matrix multiplication with a symmetric input matrix (SYMM). All three computations appear in the Level 3 Basic Linear Algebra Subprograms (BLAS) and are widely used in applications involving symmetric matrices. We establish communication lower bounds for these kernels in both sequential and distributed-memory parallel computational models, and we show that our bounds are tight by presenting communication-optimal algorithms for each setting. Our lower bound proofs rely on applying a geometric inequality for symmetric computations and analytically solving constrained nonlinear optimization problems. In the optimal algorithms, the symmetric matrix is accessed, and its corresponding computations are performed, according to a triangular block partitioning scheme.
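The three kernels the abstract defines can be illustrated with a minimal NumPy sketch. This is only reference semantics under assumed dimensions (the variable names and sizes are ours, not the paper's); an actual BLAS implementation exploits symmetry to compute and store roughly half of the symmetric output, which is precisely what makes the communication analysis of these kernels differ from general matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3
A = rng.standard_normal((n, k))
B = rng.standard_normal((n, k))
S = rng.standard_normal((n, n))
S = S + S.T                      # symmetric input matrix for SYMM
X = rng.standard_normal((n, n))

# i) SYRK: multiply a matrix by its transpose, C = A A^T
C_syrk = A @ A.T

# ii) SYR2K: C = A B^T + (A B^T)^T = A B^T + B A^T
C_syr2k = A @ B.T + B @ A.T

# iii) SYMM: matrix multiplication with a symmetric input, C = S X
C_symm = S @ X

# SYRK and SYR2K outputs are symmetric by construction, so only a
# triangular part of C needs to be computed and communicated.
assert np.allclose(C_syrk, C_syrk.T)
assert np.allclose(C_syr2k, C_syr2k.T)
```

Note that the SYMM output is generally not symmetric; only its input matrix is, which is why the paper treats it separately from the two rank-update kernels.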