{"title":"矩阵乘法近似双线性计算的随机化","authors":"Osman Asif Malik, Stephen Becker","doi":"10.1080/23799927.2020.1861104","DOIUrl":null,"url":null,"abstract":"ABSTRACT We present a method for randomizing formulas for bilinear computation of matrix products which does not increase the leading order complexity of the computation. We consider the implications of such randomization when there are two sources of error. The first source is due to the computation formula itself only being approximately correct. Such formulas come up when numerically searching for faster matrix multiplication algorithms. The second source is due to using floating point arithmetic. This kind of error is especially important when computing on low precision hardware like GPUs. Our theoretical results and numerical experiments indicate that our method can improve performance when the two kinds of error are present individually, as well as when they are present at the same time.","PeriodicalId":37216,"journal":{"name":"International Journal of Computer Mathematics: Computer Systems Theory","volume":null,"pages":null},"PeriodicalIF":0.9000,"publicationDate":"2019-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Randomization of approximate bilinear computation for matrix multiplication\",\"authors\":\"Osman Asif Malik, Stephen Becker\",\"doi\":\"10.1080/23799927.2020.1861104\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT We present a method for randomizing formulas for bilinear computation of matrix products which does not increase the leading order complexity of the computation. We consider the implications of such randomization when there are two sources of error. The first source is due to the computation formula itself only being approximately correct. Such formulas come up when numerically searching for faster matrix multiplication algorithms. The second source is due to using floating point arithmetic. This kind of error is especially important when computing on low precision hardware like GPUs. 
Our theoretical results and numerical experiments indicate that our method can improve performance when the two kinds of error are present individually, as well as when they are present at the same time.\",\"PeriodicalId\":37216,\"journal\":{\"name\":\"International Journal of Computer Mathematics: Computer Systems Theory\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2019-05-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Computer Mathematics: Computer Systems Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/23799927.2020.1861104\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Mathematics: Computer Systems Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/23799927.2020.1861104","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Randomization of approximate bilinear computation for matrix multiplication
ABSTRACT We present a method for randomizing formulas for the bilinear computation of matrix products that does not increase the leading-order complexity of the computation. We consider the implications of such randomization when there are two sources of error. The first arises because the computation formula itself is only approximately correct; such formulas come up when numerically searching for faster matrix multiplication algorithms. The second arises from the use of floating-point arithmetic; this kind of error is especially important when computing on low-precision hardware such as GPUs. Our theoretical results and numerical experiments indicate that our method can improve performance when the two kinds of error are present individually, as well as when they are present simultaneously.
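To make the setting concrete, the following is a minimal illustrative sketch in Python, not code from the paper: strassen_2x2 is the standard example of an exact bilinear formula for matrix products (7 block multiplications instead of 8), and sign_randomized_multiply shows one simple way a bilinear formula can be wrapped with random ±1 diagonal scalings at only O(n^2) extra cost, so the leading-order complexity is unchanged. The function names and the particular sign-scaling scheme are assumptions for illustration and are not claimed to be the authors' exact randomization.

import numpy as np

def strassen_2x2(A, B):
    # One level of Strassen's bilinear formula for square matrices of even size:
    # a standard example of a bilinear computation formula for matrix products.
    n = A.shape[0] // 2
    a11, a12, a21, a22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    b11, b12, b21, b22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    m1 = (a11 + a22) @ (b11 + b22)
    m2 = (a21 + a22) @ b11
    m3 = a11 @ (b12 - b22)
    m4 = a22 @ (b21 - b11)
    m5 = (a11 + a12) @ b22
    m6 = (a21 - a11) @ (b11 + b12)
    m7 = (a12 - a22) @ (b21 + b22)
    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return np.block([[c11, c12], [c21, c22]])

def sign_randomized_multiply(bilinear_formula, A, B, rng=None):
    # Illustrative sketch (assumed scheme, not the paper's): scale the rows of A,
    # the shared inner dimension, and the columns of B by random +/-1 signs,
    # apply the bilinear formula, then undo the outer scalings. In exact
    # arithmetic the signs cancel, so the product is unchanged; the extra
    # work is O(n^2) elementwise scaling.
    rng = np.random.default_rng() if rng is None else rng
    n, m = A.shape
    p = B.shape[1]
    d_left = rng.choice([-1.0, 1.0], size=n)    # scales rows of A (and of C)
    d_mid = rng.choice([-1.0, 1.0], size=m)     # scales the inner dimension; cancels itself
    d_right = rng.choice([-1.0, 1.0], size=p)   # scales columns of B (and of C)
    A_r = d_left[:, None] * A * d_mid[None, :]
    B_r = d_mid[:, None] * B * d_right[None, :]
    C_r = bilinear_formula(A_r, B_r)
    # Diagonal +/-1 matrices are their own inverses, so this undoes the scalings.
    return d_left[:, None] * C_r * d_right[None, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)).astype(np.float32)
    B = rng.standard_normal((4, 4)).astype(np.float32)
    C = sign_randomized_multiply(strassen_2x2, A, B, rng)
    print("residual vs. plain product:", np.linalg.norm(C - A @ B))

In this toy setting the formula is exact and only floating-point error is present; the paper's setting also covers formulas that are themselves only approximately correct, where randomization of the inputs can likewise affect the error behavior.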