{"title":"Multigrid Methods Using Block Floating Point Arithmetic","authors":"Nils Kohl, Stephen F. McCormick, Rasmus Tamstorf","doi":"10.1137/23m1581819","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Scientific Computing, Ahead of Print. <br/> Abstract. Block floating point (BFP) arithmetic is currently seeing a resurgence in interest because it requires less power and less chip area and is less complicated to implement in hardware than standard floating point arithmetic. This paper explores the application of BFP to mixed- and progressive-precision multigrid methods, enabling the solution of linear elliptic partial differential equations (PDEs) in energy- and hardware-efficient integer arithmetic. While most existing applications of BFP arithmetic tend to use small block sizes, the block size here is chosen to be maximal such that matrices and vectors share a single exponent for all entries. This is sometimes also referred to as a scaled fixed point format. We provide algorithms for BLAS-like routines for BFP arithmetic that ensure exact vector-vector and matrix-vector computations up to a specified precision. Using these algorithms, we study the asymptotic precision requirements for achieving discretization-error-accuracy. We demonstrate that some computations can be performed using only 4-bit integers, while the number of bits required to attain a certain target accuracy is similar to that of standard floating point arithmetic. Finally, we present a heuristic for full multigrid in BFP arithmetic based on saturation and truncation that still achieves discretization-error-accuracy without the need for expensive normalization steps of intermediate results.","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/23m1581819","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Block floating point (BFP) arithmetic is currently seeing a resurgence in interest because it requires less power and less chip area and is less complicated to implement in hardware than standard floating point arithmetic. This paper explores the application of BFP to mixed- and progressive-precision multigrid methods, enabling the solution of linear elliptic partial differential equations (PDEs) in energy- and hardware-efficient integer arithmetic. While most existing applications of BFP arithmetic tend to use small block sizes, the block size here is chosen to be maximal such that matrices and vectors share a single exponent for all entries. This is sometimes also referred to as a scaled fixed point format. We provide algorithms for BLAS-like routines for BFP arithmetic that ensure exact vector-vector and matrix-vector computations up to a specified precision. Using these algorithms, we study the asymptotic precision requirements for achieving discretization-error-accuracy. We demonstrate that some computations can be performed using only 4-bit integers, while the number of bits required to attain a certain target accuracy is similar to that of standard floating point arithmetic. Finally, we present a heuristic for full multigrid in BFP arithmetic based on saturation and truncation that still achieves discretization-error-accuracy without the need for expensive normalization steps of intermediate results.
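To make the shared-exponent idea concrete, the following is a minimal Python sketch of block floating point with a maximal block size: a whole vector is stored as integer mantissas with one common exponent, and a dot product is accumulated exactly in integer arithmetic before a single rescale. This is an illustration of the general BFP/scaled-fixed-point concept, not the paper's BLAS-like routines; the names to_bfp, bfp_dot, and the mantissa_bits parameter are hypothetical.

import math

def to_bfp(x, mantissa_bits=8):
    # Quantize the vector x to signed integer mantissas sharing one exponent.
    # The shared exponent is chosen so that the largest-magnitude entry just
    # fits into mantissa_bits bits (sign included); with a maximal block size,
    # one such exponent covers the entire vector.
    max_abs = max(abs(v) for v in x)
    if max_abs == 0.0:
        return [0] * len(x), 0
    # math.frexp(max_abs) = (m, E) with max_abs = m * 2**E and 0.5 <= |m| < 1.
    e = math.frexp(max_abs)[1] - (mantissa_bits - 1)
    # A production implementation would guard the rounding edge case at the
    # top of the representable range; Python's unbounded integers make this
    # sketch safe either way.
    mantissas = [int(round(v / 2.0 ** e)) for v in x]
    return mantissas, e

def bfp_dot(x, y, mantissa_bits=8):
    # Dot product of two BFP vectors: exact integer accumulation, one rescale.
    mx, ex = to_bfp(x, mantissa_bits)
    my, ey = to_bfp(y, mantissa_bits)
    acc = sum(a * b for a, b in zip(mx, my))  # exact in integer arithmetic
    return acc * 2.0 ** (ex + ey)

if __name__ == "__main__":
    u = [0.5, -1.25, 3.0]
    v = [2.0, 0.75, -0.125]
    # For these exactly representable inputs both results are -0.3125.
    print(bfp_dot(u, v), sum(a * b for a, b in zip(u, v)))

The quantization error here comes only from the initial rounding to mantissas; the accumulation itself is exact, which mirrors the abstract's point that vector-vector and matrix-vector products can be carried out exactly up to a specified precision in integer hardware.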