Randomly pivoted Cholesky: Practical approximation of a kernel matrix with few entry evaluations

Yifan Chen, Ethan N. Epperly, Joel A. Tropp, Robert J. Webber

Communications on Pure and Applied Mathematics (published 2024-12-04). DOI: 10.1002/cpa.22234
The randomly pivoted Cholesky algorithm (RPCholesky) computes a factorized rank-k approximation of an N × N positive-semidefinite (psd) matrix. RPCholesky requires only (k + 1)N entry evaluations and O(k²N) additional arithmetic operations, and it can be implemented with just a few lines of code. The method is particularly useful for approximating a kernel matrix. This paper offers a thorough new investigation of the empirical and theoretical behavior of this fundamental algorithm. For matrix approximation problems that arise in scientific machine learning, experiments show that RPCholesky matches or beats the performance of alternative algorithms. Moreover, RPCholesky provably returns low-rank approximations that are nearly optimal. The simplicity, effectiveness, and robustness of RPCholesky strongly support its use in scientific computing and machine learning applications.
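The abstract's claim that RPCholesky fits in a few lines of code is easy to illustrate. Below is a minimal NumPy sketch of the randomly pivoted Cholesky idea: sample a pivot in proportion to the residual diagonal, evaluate that column, and downdate. The `entry` oracle interface, the function name, and the seeding convention are assumptions of this sketch, not the authors' reference implementation.

```python
import numpy as np

def rp_cholesky(entry, n, k, seed=None):
    """Minimal RPCholesky sketch: psd A (n x n) ~ F @ F.T with rank k.

    `entry(rows, cols)` is an assumed oracle returning A[rows[t], cols[t]]
    elementwise for integer index arrays; it is queried on (k + 1) * n
    entries in total, matching the cost quoted in the abstract.
    """
    rng = np.random.default_rng(seed)
    rows = np.arange(n)
    F = np.zeros((n, k))
    d = entry(rows, rows)  # residual diagonal, starts as diag(A)
    for i in range(k):
        # Sample the pivot in proportion to the residual diagonal.
        s = rng.choice(n, p=d / d.sum())
        # Evaluate column s of A and subtract the part already captured.
        g = entry(rows, np.full(n, s)) - F[:, :i] @ F[s, :i]
        F[:, i] = g / np.sqrt(g[s])
        # Downdate the residual diagonal; clipping guards against round-off.
        d = np.clip(d - F[:, i] ** 2, 0.0, None)
    return F

# Example (hypothetical data): rank-50 approximation of a Gaussian
# kernel matrix on 1000 random points, without forming the full matrix.
X = np.random.default_rng(0).standard_normal((1000, 4))
entry = lambda i, j: np.exp(-np.sum((X[i] - X[j]) ** 2, axis=-1))
F = rp_cholesky(entry, 1000, 50, seed=1)
```

Note the design point the abstract emphasizes: the algorithm never needs the full matrix, only the diagonal plus one column per step, which is what makes it practical for large kernel matrices.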