{"title":"改变你的s步GMRES中的s","authors":"David Imberti, J. Erhel","doi":"10.1553/ETNA_VOL47S206","DOIUrl":null,"url":null,"abstract":"Krylov subspace methods are commonly used iterative methods for solving large sparse linear systems, however they suffer from communication bottlenecks on parallel computers. Therefore, $s$-step methods have been developed where the Krylov subspace is built block by block, so that $s$ matrix-vector multiplications can be done before orthonormalizing the block. Then Communication-Avoiding algorithms can be used for both kernels. This paper introduces a new variation on $s$-step GMRES in order to reduce the number of iterations necessary to ensure convergence, with a small overhead in the number of communications. Namely, we develop a $s$-step GMRES algorithm, where the block size is variable and increases gradually. Our numerical experiments show a good agreement with our analysis of condition numbers and demonstrate the efficiency of our variable $s$-step approach.","PeriodicalId":50536,"journal":{"name":"Electronic Transactions on Numerical Analysis","volume":null,"pages":null},"PeriodicalIF":0.8000,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Varying the s in Your s-step GMRES\",\"authors\":\"David Imberti, J. Erhel\",\"doi\":\"10.1553/ETNA_VOL47S206\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Krylov subspace methods are commonly used iterative methods for solving large sparse linear systems, however they suffer from communication bottlenecks on parallel computers. Therefore, $s$-step methods have been developed where the Krylov subspace is built block by block, so that $s$ matrix-vector multiplications can be done before orthonormalizing the block. Then Communication-Avoiding algorithms can be used for both kernels. This paper introduces a new variation on $s$-step GMRES in order to reduce the number of iterations necessary to ensure convergence, with a small overhead in the number of communications. Namely, we develop a $s$-step GMRES algorithm, where the block size is variable and increases gradually. Our numerical experiments show a good agreement with our analysis of condition numbers and demonstrate the efficiency of our variable $s$-step approach.\",\"PeriodicalId\":50536,\"journal\":{\"name\":\"Electronic Transactions on Numerical Analysis\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2018-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electronic Transactions on Numerical Analysis\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1553/ETNA_VOL47S206\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronic Transactions on Numerical Analysis","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1553/ETNA_VOL47S206","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Krylov subspace methods are commonly used iterative methods for solving large sparse linear systems; however, they suffer from communication bottlenecks on parallel computers. Therefore, $s$-step methods have been developed, in which the Krylov subspace is built block by block so that $s$ matrix-vector multiplications can be performed before orthonormalizing the block. Communication-avoiding algorithms can then be used for both kernels. This paper introduces a new variation on $s$-step GMRES that reduces the number of iterations needed to ensure convergence, with a small overhead in the number of communications. Specifically, we develop an $s$-step GMRES algorithm in which the block size is variable and increases gradually. Our numerical experiments agree well with our analysis of condition numbers and demonstrate the efficiency of our variable $s$-step approach.
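To illustrate the block-by-block basis construction described in the abstract, here is a minimal NumPy sketch of a variable-block Krylov basis builder. It is not the authors' algorithm: it assumes a monomial basis inside each block, reorthogonalized classical Gram-Schmidt against the existing basis, and a hypothetical block-size schedule (1, 2, 3, 4); the GMRES least-squares solve, the Hessenberg recurrence, and the Newton/Chebyshev basis choices used in practice for stability are omitted. The function name `variable_s_step_basis` is ours.

```python
import numpy as np

def variable_s_step_basis(A, b, s_schedule):
    """Sketch of the s-step basis-building kernel with a growing block size.

    For each block size s in `s_schedule`, perform s matrix-vector
    products before orthonormalizing the block against the current
    basis (block Gram-Schmidt with reorthogonalization, then a local QR).
    """
    n = A.shape[0]
    Q = (b / np.linalg.norm(b)).reshape(n, 1)   # current orthonormal basis
    for s in s_schedule:
        # s matrix-vector products form the new block (monomial basis here)
        W = np.empty((n, s))
        v = Q[:, -1]
        for j in range(s):
            v = A @ v
            W[:, j] = v
        # block classical Gram-Schmidt against the existing basis,
        # applied twice (CGS2) to improve orthogonality
        W -= Q @ (Q.T @ W)
        W -= Q @ (Q.T @ W)
        # local QR factorization of the block
        Wq, _ = np.linalg.qr(W)
        Q = np.hstack([Q, Wq])
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    Q = variable_s_step_basis(A, b, s_schedule=[1, 2, 3, 4])
    # the assembled basis should be (nearly) orthonormal
    print(np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))
```

In this sketch, letting the schedule grow gradually mirrors the trade-off the paper studies: larger blocks mean fewer orthogonalization (communication) steps per basis vector, but the conditioning of each unorthogonalized block tends to worsen as s grows, so starting small and increasing s is one plausible compromise.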
Journal description:
Electronic Transactions on Numerical Analysis (ETNA) is an electronic journal for the publication of significant new developments in numerical analysis and scientific computing. Papers of the highest quality that deal with the analysis of algorithms for the solution of continuous models and numerical linear algebra are appropriate for ETNA, as are papers of similar quality that discuss implementation and performance of such algorithms. New algorithms for current or new computer architectures are appropriate provided that they are numerically sound. However, the focus of the publication should be on the algorithm rather than on the architecture. The journal is published by the Kent State University Library in conjunction with the Institute of Computational Mathematics at Kent State University, and in cooperation with the Johann Radon Institute for Computational and Applied Mathematics of the Austrian Academy of Sciences (RICAM).