Randomly sparsified Richardson iteration: A dimension‐independent sparse linear solver
Jonathan Weare, Robert J. Webber
Communications on Pure and Applied Mathematics, published 2025-09-08. DOI: 10.1002/cpa.70012
Recently, a class of algorithms combining classical fixed‐point iterations with repeated random sparsification of approximate solution vectors has been successfully applied to eigenproblems with matrices as large as . So far, a complete mathematical explanation for this success has proven elusive. The family of methods has not yet been extended to the important case of linear system solves. In this paper, we propose a new scheme based on repeated random sparsification that is capable of solving sparse linear systems in arbitrarily high dimensions. We provide a complete mathematical analysis of this new algorithm. Our analysis establishes a faster‐than‐Monte Carlo convergence rate and justifies use of the scheme even when the solution is too large to store as a dense vector.
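The abstract does not spell out the scheme, so the following is only a minimal illustrative sketch of the general idea it names: Richardson iteration for a linear system, with the iterate randomly sparsified at each step so that only roughly `m` nonzero entries are ever stored, and the tail of the trajectory averaged to reduce variance. The sparsification rule below (keep entry `i` with probability proportional to its magnitude and rescale by the inverse probability, which makes the sparsified vector unbiased in expectation) is a common choice in this literature, not necessarily the one used in the paper; all function names and parameters here are hypothetical.

```python
import numpy as np

def sparsify(x, m, rng):
    """Unbiased random sparsification: keep entry i with probability
    p_i = min(1, m*|x_i| / ||x||_1) and rescale it by 1/p_i, so that
    E[output] = x while the expected number of nonzeros is at most m.
    (Assumed rule for illustration -- not taken from the paper.)"""
    s = np.sum(np.abs(x))
    if s == 0.0:
        return x.copy()
    p = np.minimum(1.0, m * np.abs(x) / s)
    keep = rng.random(x.size) < p
    out = np.zeros_like(x)
    out[keep] = x[keep] / p[keep]
    return out

def sparse_richardson(A, b, m=50, steps=500, seed=0):
    """Richardson iteration x <- x + (b - A x) for solving A x = b
    (convergent when ||I - A|| < 1), with the iterate sparsified at
    every step and the second half of the trajectory averaged."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    avg = np.zeros_like(b)
    burn = steps // 2
    for k in range(steps):
        x = sparsify(x + b - A @ x, m, rng)
        if k >= burn:
            avg += x
    return avg / (steps - burn)
```

In a genuinely high-dimensional setting the iterate would be held in a sparse data structure and `A @ x` would touch only the columns of `A` hit by the surviving entries, which is what makes the per-step cost independent of the ambient dimension; the dense NumPy arrays above are purely for readability.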