MPCR: Multi- and Mixed-Precision Computations Package in R
Mary Lai O. Salvana, Sameh Abdulah, Minwoo Kim, David Helmy, Ying Sun, Marc G. Genton
arXiv - STAT - Computation, arXiv:2406.02701, published 2024-06-04
Abstract
Computational statistics has traditionally relied on double-precision (64-bit) data structures and full-precision operations, yielding higher-than-necessary accuracy for certain applications. Recently, there has been growing interest in low-precision options that can reduce computational complexity while still achieving the required level of accuracy. This trend has been amplified by new hardware optimized for mixed-precision computations, such as NVIDIA's Tensor Cores in its V100, A100, and H100 GPUs, Intel CPUs with Deep Learning (DL) Boost, Google Tensor Processing Units (TPUs), Field-Programmable Gate Arrays (FPGAs), ARM CPUs, and others. However, using lower precision may introduce numerical instabilities and accuracy issues. Nevertheless, some applications have shown robustness to low-precision computations, leading to new multi- and mixed-precision algorithms that balance accuracy and computational cost. To address this need, we introduce MPCR, a novel R package that supports three precision types (16-, 32-, and 64-bit) and their combinations, and we demonstrate its usage in commonly used frequentist and Bayesian statistical examples. The MPCR package is written in C++ and integrated into R through the Rcpp package, enabling highly optimized operations at various precisions.
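
To make the description concrete, the sketch below shows how a user might create matrices at the three supported precisions. The constructor name as.MPCR(), its arguments, the precision labels "half", "single", and "double", and the dispatched crossprod() call are illustrative assumptions based on the abstract, not the package's confirmed interface; consult the MPCR documentation for the actual API.

library(MPCR)   # the package's C++ backend is exposed to R through Rcpp

set.seed(1)
x <- matrix(rnorm(100 * 100), nrow = 100, ncol = 100)

# Hypothetical conversion of a base R matrix into MPCR objects at the
# three precisions the abstract describes (16-, 32-, and 64-bit).
x_half   <- as.MPCR(x, nrow = 100, ncol = 100, precision = "half")
x_single <- as.MPCR(x, nrow = 100, ncol = 100, precision = "single")
x_double <- as.MPCR(x, nrow = 100, ncol = 100, precision = "double")

# Assuming standard linear-algebra generics dispatch to the optimized
# backend, a cross-product in single precision reads like ordinary R code:
xtx <- crossprod(x_single)

In such a design, precision is a property of the data object, so the same statistical code can be rerun at a lower precision simply by changing the conversion step.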