{"title":"Portable parallel Level-3 BLAS in Linda","authors":"B. Ghosh, M. Schultz","doi":"10.1109/SHPCC.1992.232664","DOIUrl":null,"url":null,"abstract":"Describes an approach towards providing an efficient Level-3 BLAS library over a variety of parallel architectures using C-Linda. A blocked linear algebra program calling the sequential Level-3 BLAS can now run on both shared and distributed memory environments (which support Linda) by simply replacing each call by a call to the corresponding parallel Linda Level-3 BLAS. The authors summarise some of the implementation and algorithmic issues related to the matrix multiplication subroutine. All the various matrix algorithms being block-structured, they are particularly interested in parallel computers with hierarchical memory systems. Experimental data for their implementations show substantial speedups on shared memory, disjoint memory and networked configurations of processors. The authors also present the use of their parallel subroutines in blocked dense LU decomposition and present some preliminary experimental data.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SHPCC.1992.232664","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Describes an approach towards providing an efficient Level-3 BLAS library over a variety of parallel architectures using C-Linda. A blocked linear algebra program calling the sequential Level-3 BLAS can now run on both shared and distributed memory environments (which support Linda) by simply replacing each call with a call to the corresponding parallel Linda Level-3 BLAS. The authors summarise some of the implementation and algorithmic issues related to the matrix multiplication subroutine. Since all the matrix algorithms are block-structured, the authors are particularly interested in parallel computers with hierarchical memory systems. Experimental data for their implementations show substantial speedups on shared memory, disjoint memory, and networked configurations of processors. The authors also present the use of their parallel subroutines in blocked dense LU decomposition, along with some preliminary experimental data.
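The blocking the abstract refers to can be illustrated with a minimal sketch of a blocked matrix multiply (the serial kernel underlying xGEMM). This is not the authors' code: the function name `blocked_gemm`, the row-major layout, and the block size `NB` are assumptions for illustration. In the Linda version described by the paper, each output block pair would instead become a task drawn from tuple space by worker processes.

```c
#include <stddef.h>

#define N  8   /* matrix order (illustrative) */
#define NB 4   /* block size; tuned to cache in a hierarchical memory system */

/* C := C + A*B for N x N row-major matrices.
   Hypothetical blocked kernel in the spirit of Level-3 BLAS xGEMM:
   the three outer loops walk NB x NB blocks so each block pair stays
   resident in fast memory while it is reused. */
void blocked_gemm(const double *A, const double *B, double *C) {
    for (size_t ii = 0; ii < N; ii += NB)
        for (size_t jj = 0; jj < N; jj += NB)
            for (size_t kk = 0; kk < N; kk += NB)
                /* multiply one NB x NB block pair; in a Linda
                   parallelization each (ii, jj) block update could be
                   an independent worker task */
                for (size_t i = ii; i < ii + NB; i++)
                    for (size_t k = kk; k < kk + NB; k++) {
                        double a = A[i * N + k];
                        for (size_t j = jj; j < jj + NB; j++)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```

The drop-in replacement the abstract describes would amount to swapping a call like `dgemm(...)` for the parallel Linda equivalent while keeping the blocked caller unchanged.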