{"title":"Lyapunov方程的低秩修正Galerkin方法","authors":"Kathryn Lund, Davide Palitta","doi":"arxiv-2312.00463","DOIUrl":null,"url":null,"abstract":"Of all the possible projection methods for solving large-scale Lyapunov\nmatrix equations, Galerkin approaches remain much more popular than\nPetrov-Galerkin ones. This is mainly due to the different nature of the\nprojected problems stemming from these two families of methods. While a\nGalerkin approach leads to the solution of a low-dimensional matrix equation\nper iteration, a matrix least-squares problem needs to be solved per iteration\nin a Petrov-Galerkin setting. The significant computational cost of these\nleast-squares problems has steered researchers towards Galerkin methods in\nspite of the appealing minimization properties of Petrov-Galerkin schemes. In\nthis paper we introduce a framework that allows for modifying the Galerkin\napproach by low-rank, additive corrections to the projected matrix equation\nproblem with the two-fold goal of attaining monotonic convergence rates similar\nto those of Petrov-Galerkin schemes while maintaining essentially the same\ncomputational cost of the original Galerkin method. We analyze the\nwell-posedness of our framework and determine possible scenarios where we\nexpect the residual norm attained by two low-rank-modified variants to behave\nsimilarly to the one computed by a Petrov-Galerkin technique. A panel of\ndiverse numerical examples shows the behavior and potential of our new\napproach.","PeriodicalId":501061,"journal":{"name":"arXiv - CS - Numerical Analysis","volume":"40 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Low-rank-modified Galerkin methods for the Lyapunov equation\",\"authors\":\"Kathryn Lund, Davide Palitta\",\"doi\":\"arxiv-2312.00463\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Of all the possible projection methods for solving large-scale Lyapunov\\nmatrix equations, Galerkin approaches remain much more popular than\\nPetrov-Galerkin ones. This is mainly due to the different nature of the\\nprojected problems stemming from these two families of methods. While a\\nGalerkin approach leads to the solution of a low-dimensional matrix equation\\nper iteration, a matrix least-squares problem needs to be solved per iteration\\nin a Petrov-Galerkin setting. The significant computational cost of these\\nleast-squares problems has steered researchers towards Galerkin methods in\\nspite of the appealing minimization properties of Petrov-Galerkin schemes. In\\nthis paper we introduce a framework that allows for modifying the Galerkin\\napproach by low-rank, additive corrections to the projected matrix equation\\nproblem with the two-fold goal of attaining monotonic convergence rates similar\\nto those of Petrov-Galerkin schemes while maintaining essentially the same\\ncomputational cost of the original Galerkin method. We analyze the\\nwell-posedness of our framework and determine possible scenarios where we\\nexpect the residual norm attained by two low-rank-modified variants to behave\\nsimilarly to the one computed by a Petrov-Galerkin technique. 
A panel of\\ndiverse numerical examples shows the behavior and potential of our new\\napproach.\",\"PeriodicalId\":501061,\"journal\":{\"name\":\"arXiv - CS - Numerical Analysis\",\"volume\":\"40 3\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Numerical Analysis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2312.00463\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Numerical Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2312.00463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Low-rank-modified Galerkin methods for the Lyapunov equation
Of all the possible projection methods for solving large-scale Lyapunov
matrix equations, Galerkin approaches remain much more popular than
Petrov-Galerkin ones. This is mainly due to the different nature of the
projected problems stemming from these two families of methods. While a
Galerkin approach requires solving a low-dimensional matrix equation at each
iteration, a Petrov-Galerkin setting requires solving a matrix least-squares
problem instead. The significant computational cost of these
least-squares problems has steered researchers towards Galerkin methods in
spite of the appealing minimization properties of Petrov-Galerkin schemes. In
this paper we introduce a framework that allows for modifying the Galerkin
approach by low-rank, additive corrections to the projected matrix equation
problem with the two-fold goal of attaining monotonic convergence rates similar
to those of Petrov-Galerkin schemes while maintaining essentially the same
computational cost as the original Galerkin method. We analyze the
well-posedness of our framework and identify scenarios in which we expect
the residual norm attained by two low-rank-modified variants to behave
similarly to that computed by a Petrov-Galerkin technique. A panel of
diverse numerical examples shows the behavior and potential of our new
approach.
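
To make the computational contrast concrete, the following is a minimal sketch of a single Galerkin projection step for the standard Lyapunov equation A X + X A^T + B B^T = 0. It illustrates the classical (unmodified) Galerkin approach described above, not the low-rank-modified method introduced in the paper; the test matrix A, the block Krylov basis, and all dimensions are assumptions chosen for this example.

# Minimal sketch: one Galerkin projection step for A X + X A^T + B B^T = 0.
# Illustrative only; A, B, the basis construction, and all dimensions are
# assumptions for this example, not the paper's setup.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n, m, k = 500, 2, 10
# A stable (negative-definite) test matrix: a 1D finite-difference Laplacian.
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
rng = np.random.default_rng(0)
B = rng.standard_normal((n, m))

# Orthonormal basis V of the block Krylov space span{B, AB, ..., A^(k-1) B}.
blocks = [B]
for _ in range(k - 1):
    blocks.append(A @ blocks[-1])
V, _ = np.linalg.qr(np.hstack(blocks))

# Galerkin condition: impose V^T R(X) V = 0 with X = V Y V^T, which reduces to
# the small projected Lyapunov equation
# (V^T A V) Y + Y (V^T A V)^T + (V^T B)(V^T B)^T = 0.
Ar = V.T @ A @ V                               # (k*m) x (k*m) projected matrix
Br = V.T @ B
Y = solve_continuous_lyapunov(Ar, -Br @ Br.T)  # solves Ar Y + Y Ar^T = -Br Br^T

# Low-rank approximate solution and its residual norm.
X = V @ Y @ V.T
res = np.linalg.norm(A @ X + X @ A.T + B @ B.T, "fro")
print(f"Galerkin residual norm: {res:.3e}")

Because A here is negative definite, the projected matrix Ar = V^T A V inherits stability and the small Lyapunov equation has a unique solution; for general A this is not guaranteed, which is one reason the well-posedness analysis mentioned in the abstract matters. A Petrov-Galerkin variant would instead minimize the residual over the same space, at the cost of a matrix least-squares problem per iteration.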