J. Nash, in Compact Numerical Methods for Computers (DOI: 10.1201/9781315139784-19, published 2018-12-13).
The Conjugate Gradients Method Applied to Problems in Linear Algebra
This monograph concludes by applying the conjugate gradients method, developed in chapter 16 for the minimisation of nonlinear functions, to linear equations, linear least-squares and algebraic eigenvalue problems. The methods suggested may not be the most efficient or effective of their type, since this subject area has not attracted a great deal of careful research. In fact, much of the work performed on the sparse algebraic eigenvalue problem has been carried out by scientists and engineers in search of solutions. Stewart (1976) has prepared an extensive bibliography on the large, sparse, generalised symmetric matrix eigenvalue problem, in which it is unfortunately difficult to find many reports that do more than describe a method. Thorough, or even perfunctory, testing is often omitted, and convergence is rarely demonstrated, let alone proved. The work of Professor Axel Ruhe and his co-workers at Umeå is a notable exception to this generality. The lack of testing is due partly to the sheer size of the matrices that may be involved in real problems and the cost of finding eigensolutions. The linear equations and least-squares problems have enjoyed more diligent study. A number of studies have been made of the conjugate gradients method for linear-equation systems with positive definite coefficient matrices, notably that of Reid (1971). Related methods have been developed in particular by Paige and Saunders (1975), who relate the conjugate gradients method to the Lanczos algorithm for the algebraic eigenproblem. The Lanczos algorithm has been left out of this work because I feel it to be a tool best used by someone prepared to tolerate its quirks.
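As a concrete sketch (not the book's own code), the classical conjugate gradients iteration for A x = b with a symmetric positive definite A, of the kind studied by Reid (1971), can be written in a few lines. The function name, tolerance, and iteration cap below are illustrative assumptions:

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b, assuming A is symmetric positive definite.

    A minimal sketch of the classical method, not an efficient
    or robust implementation.
    """
    n = len(b)
    if max_iter is None:
        max_iter = 10 * n
    x = np.zeros(n)
    r = b - A @ x            # residual
    p = r.copy()             # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # next A-conjugate direction
        rs_old = rs_new
    return x
```

In exact arithmetic the iteration terminates in at most n steps, which is why the method attracted attention for large sparse systems where only products A p are needed, never A itself in factored form.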
This feeling about the Lanczos algorithm accords with the statement of Kahan and Parlett (1976): 'The urge to write a universal Lanczos program should be resisted, at least until the process is better understood.' However, in the hands of an expert, it is a very powerful method for finding the eigenvalues of a large symmetric matrix. For indefinite coefficient matrices, I would expect the Paige-Saunders method to be preferred by virtue of its design. In preparing the first edition of this book, I experimented briefly with FORTRAN codes for several methods of iteratively solving linear equations and least-squares problems, finding no clear advantage for any one approach, though I did not focus on indefinite matrices. The treatment which follows will therefore stay with conjugate gradients, which has the advantage of introducing no fundamentally new ideas. It must be pointed out, however, that the general-purpose minimisation algorithm 22 does not perform very well on linear least-squares or Rayleigh quotient minimisations.
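To make the Rayleigh quotient minimisation concrete: for a symmetric matrix A, minimising R(x) = xᵀAx / xᵀx over nonzero x yields the smallest eigenvalue. A crude steepest-descent sketch follows; the function names, step size, and iteration count are hypothetical choices for illustration, not the book's algorithm 22:

```python
import numpy as np

def rayleigh_quotient(A, x):
    # R(x) = (x^T A x) / (x^T x); stationary points are eigenvectors of A.
    return (x @ A @ x) / (x @ x)

def min_eig_by_descent(A, steps=500, lr=0.1, seed=0):
    """Crude steepest descent on the Rayleigh quotient.

    Illustrative only; converges to the smallest eigenvalue of a
    symmetric A for a generic starting vector.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        r = rayleigh_quotient(A, x)
        g = 2.0 * (A @ x - r * x) / (x @ x)  # gradient of R at x
        x = x - lr * g
        x /= np.linalg.norm(x)               # renormalise to avoid scale drift
    return rayleigh_quotient(A, x)
```

The gradient 2(Ax - R(x)x)/xᵀx vanishes exactly when x is an eigenvector, which is the property a specialised conjugate-gradients eigensolver exploits more efficiently than a general-purpose minimiser.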