{"title":"On a shrink-and-expand technique for block eigensolvers","authors":"Yuqi Liu, Yuxin Ma, Meiyue Shao","doi":"arxiv-2409.05572","DOIUrl":null,"url":null,"abstract":"In block eigenvalue algorithms, such as the subspace iteration algorithm and\nthe locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm,\na large block size is often employed to achieve robustness and rapid\nconvergence. However, using a large block size also increases the computational\ncost. Traditionally, the block size is typically reduced after convergence of\nsome eigenpairs, known as deflation. In this work, we propose a\nnon-deflation-based, more aggressive technique, where the block size is\nadjusted dynamically during the algorithm. This technique can be applied to a\nwide range of block eigensolvers, reducing computational cost without\ncompromising convergence speed. We present three adaptive strategies for\nadjusting the block size, and apply them to four well-known eigensolvers as\nexamples. Theoretical analysis and numerical experiments are provided to\nillustrate the efficiency of the proposed technique. In practice, an overall\nacceleration of 20% to 30% is observed.","PeriodicalId":501162,"journal":{"name":"arXiv - MATH - Numerical Analysis","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Numerical Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05572","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In block eigenvalue algorithms, such as the subspace iteration algorithm and the locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm, a large block size is often employed to achieve robustness and rapid convergence. However, a large block size also increases the computational cost. Traditionally, the block size is reduced only after some eigenpairs have converged, a process known as deflation. In this work, we propose a non-deflation-based, more aggressive technique in which the block size is adjusted dynamically as the iteration proceeds. This technique can be applied to a wide range of block eigensolvers, reducing computational cost without compromising convergence speed. We present three adaptive strategies for adjusting the block size and apply them to four well-known eigensolvers as examples. Theoretical analysis and numerical experiments are provided to illustrate the efficiency of the proposed technique. In practice, an overall acceleration of 20% to 30% is observed.
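
To illustrate the general idea of adjusting the block size on the fly, the following sketch shows a plain subspace iteration in Python/NumPy that starts with a block size larger than the number of wanted eigenpairs and cuts the block back once the wanted Ritz values stop moving. The function name subspace_iteration_adaptive, the shrink_tol parameter, the specific shrink rule, and the test matrix are all illustrative assumptions; they are not taken from the paper and do not correspond to any of its three adaptive strategies.

import numpy as np

def subspace_iteration_adaptive(A, k, p0, tol=1e-8, shrink_tol=1e-4, maxit=500, seed=0):
    """Toy subspace iteration for the k dominant eigenpairs of a symmetric
    positive semidefinite matrix A, starting with block size p0 >= k and
    shrinking the block once the wanted Ritz values stop moving."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    p = p0
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))
    prev_ritz = None
    for _ in range(maxit):
        # One block step: multiply, orthonormalize, Rayleigh-Ritz.
        Q, _ = np.linalg.qr(A @ X)
        H = Q.T @ (A @ Q)
        w, V = np.linalg.eigh(H)
        w, V = w[::-1], V[:, ::-1]          # largest Ritz values first
        X = Q @ V
        # Residuals of the k wanted Ritz pairs.
        R = A @ X[:, :k] - X[:, :k] * w[:k]
        if np.all(np.linalg.norm(R, axis=0) <= tol * np.abs(w[:k])):
            break
        # Hypothetical shrink rule (NOT one of the paper's strategies):
        # once the wanted Ritz values change by less than shrink_tol
        # (relatively) between iterations, drop the extra columns to
        # reduce the per-iteration cost.
        if p > k and prev_ritz is not None:
            if np.max(np.abs(w[:k] - prev_ritz)) <= shrink_tol * np.max(np.abs(w[:k])):
                p = k
                X = X[:, :p]
        prev_ritz = w[:k].copy()
    return w[:k], X[:, :k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 300
    # Synthetic test matrix with five well-separated dominant eigenvalues.
    d = np.concatenate(([100.0, 90.0, 80.0, 70.0, 60.0], np.linspace(1.0, 10.0, n - 5)))
    Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q0 @ np.diag(d) @ Q0.T
    vals, vecs = subspace_iteration_adaptive(A, k=5, p0=12)
    print(vals)   # should be close to 100, 90, 80, 70, 60

Unlike deflation, which locks converged columns and continues with the rest, the toy rule above simply discards the extra columns once they have served their purpose; this is only meant to convey the flavor of a non-deflation-based block-size adjustment, not the paper's actual strategies.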