{"title":"Accurate bidiagonal decomposition of Lagrange–Vandermonde matrices and applications","authors":"A. Marco, José‐Javier Martínez, Raquel Viaña","doi":"10.1002/nla.2527","DOIUrl":"https://doi.org/10.1002/nla.2527","url":null,"abstract":"Lagrange–Vandermonde matrices are the collocation matrices corresponding to Lagrange‐type bases, obtained by removing the denominators from each element of a Lagrange basis. It is proved that, provided the nodes required to create the Lagrange‐type basis and the corresponding collocation matrix are properly ordered, such matrices are strictly totally positive. A fast algorithm to compute the bidiagonal decomposition of these matrices to high relative accuracy is presented. As an application, the problems of eigenvalue computation, linear system solving and inverse computation are solved in an efficient and accurate way for this type of matrices. Moreover, the proposed algorithms allow to solve fastly and to high relative accuracy some of the cited problems when the involved matrices are collocation matrices corresponding to the standard Lagrange basis, although such collocation matrices are not totally positive. Numerical experiments illustrating the good performance of our approach are also included.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"1 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42782301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of correlated observation errors on the conditioning of variational data assimilation problems","authors":"O. Goux, S. Gürol, A. Weaver, Y. Diouane, Oliver Guillet","doi":"10.1002/nla.2529","DOIUrl":"https://doi.org/10.1002/nla.2529","url":null,"abstract":"An important class of nonlinear weighted least‐squares problems arises from the assimilation of observations in atmospheric and ocean models. In variational data assimilation, inverse error covariance matrices define the weighting matrices of the least‐squares problem. For observation errors, a diagonal matrix (i.e., uncorrelated errors) is often assumed for simplicity even when observation errors are suspected to be correlated. While accounting for observation‐error correlations should improve the quality of the solution, it also affects the convergence rate of the minimization algorithms used to iterate to the solution. If the minimization process is stopped before reaching full convergence, which is usually the case in operational applications, the solution may be degraded even if the observation‐error correlations are correctly accounted for. In this article, we explore the influence of the observation‐error correlation matrix () on the convergence rate of a preconditioned conjugate gradient (PCG) algorithm applied to a one‐dimensional variational data assimilation (1D‐Var) problem. We design the idealized 1D‐Var system to include two key features used in more complex systems: we use the background error covariance matrix () as a preconditioner (B‐PCG); and we use a diffusion operator to model spatial correlations in and . Analytical and numerical results with the 1D‐Var system show a strong sensitivity of the convergence rate of B‐PCG to the parameters of the diffusion‐based correlation models. Depending on the parameter choices, correlated observation errors can either speed up or slow down the convergence. In practice, a compromise may be required in the parameter specifications of and between staying close to the best available estimates on the one hand and ensuring an adequate convergence rate of the minimization algorithm on the other.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45145878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rank‐structured approximation of some Cauchy matrices with sublinear complexity","authors":"Mikhail Lepilov, J. Xia","doi":"10.1002/nla.2526","DOIUrl":"https://doi.org/10.1002/nla.2526","url":null,"abstract":"In this article, we consider the rank‐structured approximation of one important type of Cauchy matrix. This approximation plays a key role in some structured matrix methods such as stable and efficient direct solvers and other algorithms for Toeplitz matrices and certain kernel matrices. Previous rank‐structured approximations (specifically hierarchically semiseparable, or HSS, approximations) for such a matrix of size cost at least complexity. Here, we show how to construct an HSS approximation with sublinear (specifically, ) complexity. The main ideas include extensive computation reuse and an analytical far‐field compression strategy. Low‐rank compression at each hierarchical level is restricted to just a single off‐diagonal block row, and a resulting basis matrix is then reused for other off‐diagonal block rows as well as off‐diagonal block columns. The relationships among the off‐diagonal blocks are rigorously analyzed. The far‐field compression uses an analytical proxy point method where we optimize the choice of some parameters so as to obtain accurate low‐rank approximations. Both the basis reuse ideas and the resulting analytical hierarchical compression scheme can be generalized to some other kernel matrices and are useful for accelerating relevant rank‐structured approximations (though not subsequent operations like matrix‐vector multiplications).","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42911576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Volume‐based subset selection","authors":"Alexander Osinsky","doi":"10.1002/nla.2525","DOIUrl":"https://doi.org/10.1002/nla.2525","url":null,"abstract":"This paper provides a fast algorithm for the search of a dominant (locally maximum volume) submatrix, generalizing the existing algorithms from n⩽r$$ nleqslant r $$ to n>r$$ n>r $$ submatrix columns, where r$$ r $$ is the number of searched rows. We prove the bound on the number of steps of the algorithm, which allows it to outperform the existing subset selection algorithms in either the bounds on the norm of the pseudoinverse of the found submatrix, or the bounds on the complexity, or both.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46946721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Total positivity and accurate computations with Gram matrices of Said‐Ball bases","authors":"E. Mainar, J. M. Pena, B. Rubio","doi":"10.1002/nla.2521","DOIUrl":"https://doi.org/10.1002/nla.2521","url":null,"abstract":"In this article, it is proved that Gram matrices of totally positive bases of the space of polynomials of a given degree on a compact interval are totally positive. Conditions to guarantee computations to high relative accuracy with those matrices are also obtained. Furthermore, a fast and accurate algorithm to compute the bidiagonal factorization of Gram matrices of the Said‐Ball bases is obtained and used to compute to high relative accuracy their singular values and inverses, as well as the solution of some linear systems associated with these matrices. Numerical examples are included.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47051257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear approximation of functions based on nonnegative least squares solver","authors":"Petr N. Vabishchevich","doi":"10.1002/nla.2522","DOIUrl":"https://doi.org/10.1002/nla.2522","url":null,"abstract":"In computational practice, most attention is paid to rational approximations of functions and approximations by the sum of exponents. We consider a wide enough class of nonlinear approximations characterized by a set of two required parameters. The approximating function is linear in the first parameter; these parameters are assumed to be positive. The individual terms of the approximating function represent a fixed function that depends nonlinearly on the second parameter. A numerical approximation minimizes the residual functional by approximating function values at individual points. The second parameter's value is set on a more extensive set of points of the interval of permissible values. The proposed approach's key feature consists in determining the first parameter on each separate iteration of the classical nonnegative least squares method. The computational algorithm is used to rational approximate the function <math altimg=\"urn:x-wiley:nla:media:nla2522:nla2522-math-0001\" display=\"inline\" location=\"graphic/nla2522-math-0001.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<msup>\u0000<mrow>\u0000<mi>x</mi>\u0000</mrow>\u0000<mrow>\u0000<mo form=\"prefix\">−</mo>\u0000<mi>α</mi>\u0000</mrow>\u0000</msup>\u0000<mo>,</mo>\u0000<mspace width=\"0.3em\"></mspace>\u0000<mn>0</mn>\u0000<mo><</mo>\u0000<mi>α</mi>\u0000<mo><</mo>\u0000<mn>1</mn>\u0000<mo>,</mo>\u0000<mspace width=\"0.3em\"></mspace>\u0000<mi>x</mi>\u0000<mo>≥</mo>\u0000<mn>1</mn>\u0000</mrow>\u0000$$ {x}^{-alpha },kern0.3em 0<alpha <1,kern0.3em xge 1 $$</annotation>\u0000</semantics></math>. The second example concerns the approximation of the stretching exponential function <math altimg=\"urn:x-wiley:nla:media:nla2522:nla2522-math-0002\" display=\"inline\" location=\"graphic/nla2522-math-0002.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>exp</mi>\u0000<mo stretchy=\"false\">(</mo>\u0000<mo form=\"prefix\">−</mo>\u0000<msup>\u0000<mrow>\u0000<mi>x</mi>\u0000</mrow>\u0000<mrow>\u0000<mi>α</mi>\u0000</mrow>\u0000</msup>\u0000<mo stretchy=\"false\">)</mo>\u0000<mo>,</mo>\u0000<mspace width=\"0.0em\"></mspace>\u0000<mspace width=\"0.0em\"></mspace>\u0000<mspace width=\"0.2em\"></mspace>\u0000<mn>0</mn>\u0000<mo><</mo>\u0000<mi>α</mi>\u0000<mo><</mo>\u0000<mn>1</mn>\u0000</mrow>\u0000$$ exp left(-{x}^{alpha}right),0<alpha <1 $$</annotation>\u0000</semantics></math> at <math altimg=\"urn:x-wiley:nla:media:nla2522:nla2522-math-0003\" display=\"inline\" location=\"graphic/nla2522-math-0003.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>x</mi>\u0000<mo>≥</mo>\u0000<mn>0</mn>\u0000</mrow>\u0000$$ xge 0 $$</annotation>\u0000</semantics></math> by the sum of exponents.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"147 3","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138502938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven linear complexity low-rank approximation of general kernel matrices: A geometric approach","authors":"Difeng Cai, Edmond Chow, Yuanzhe Xi","doi":"10.1002/nla.2519","DOIUrl":"https://doi.org/10.1002/nla.2519","url":null,"abstract":"A general, <i>rectangular</i> kernel matrix may be defined as <math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0001\" display=\"inline\" location=\"graphic/nla2519-math-0001.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<msub>\u0000<mrow>\u0000<mi>K</mi>\u0000</mrow>\u0000<mrow>\u0000<mi>i</mi>\u0000<mi>j</mi>\u0000</mrow>\u0000</msub>\u0000<mo>=</mo>\u0000<mi>κ</mi>\u0000<mo stretchy=\"false\">(</mo>\u0000<msub>\u0000<mrow>\u0000<mi>x</mi>\u0000</mrow>\u0000<mrow>\u0000<mi>i</mi>\u0000</mrow>\u0000</msub>\u0000<mo>,</mo>\u0000<msub>\u0000<mrow>\u0000<mi>y</mi>\u0000</mrow>\u0000<mrow>\u0000<mi>j</mi>\u0000</mrow>\u0000</msub>\u0000<mo stretchy=\"false\">)</mo>\u0000</mrow>\u0000$$ {K}_{ij}=kappa left({x}_i,{y}_jright) $$</annotation>\u0000</semantics></math> where <math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0002\" display=\"inline\" location=\"graphic/nla2519-math-0002.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>κ</mi>\u0000<mo stretchy=\"false\">(</mo>\u0000<mi>x</mi>\u0000<mo>,</mo>\u0000<mi>y</mi>\u0000<mo stretchy=\"false\">)</mo>\u0000</mrow>\u0000$$ kappa left(x,yright) $$</annotation>\u0000</semantics></math> is a kernel function and where <math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0003\" display=\"inline\" location=\"graphic/nla2519-math-0003.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>X</mi>\u0000<mo>=</mo>\u0000<msubsup>\u0000<mrow>\u0000<mo stretchy=\"false\">{</mo>\u0000<msub>\u0000<mrow>\u0000<mi>x</mi>\u0000</mrow>\u0000<mrow>\u0000<mi>i</mi>\u0000</mrow>\u0000</msub>\u0000<mo stretchy=\"false\">}</mo>\u0000</mrow>\u0000<mrow>\u0000<mi>i</mi>\u0000<mo>=</mo>\u0000<mn>1</mn>\u0000</mrow>\u0000<mrow>\u0000<mi>m</mi>\u0000</mrow>\u0000</msubsup>\u0000</mrow>\u0000$$ X={left{{x}_iright}}_{i=1}^m $$</annotation>\u0000</semantics></math> and <math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0004\" display=\"inline\" location=\"graphic/nla2519-math-0004.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>Y</mi>\u0000<mo>=</mo>\u0000<msubsup>\u0000<mrow>\u0000<mo stretchy=\"false\">{</mo>\u0000<msub>\u0000<mrow>\u0000<mi>y</mi>\u0000</mrow>\u0000<mrow>\u0000<mi>i</mi>\u0000</mrow>\u0000</msub>\u0000<mo stretchy=\"false\">}</mo>\u0000</mrow>\u0000<mrow>\u0000<mi>i</mi>\u0000<mo>=</mo>\u0000<mn>1</mn>\u0000</mrow>\u0000<mrow>\u0000<mi>n</mi>\u0000</mrow>\u0000</msubsup>\u0000</mrow>\u0000$$ Y={left{{y}_iright}}_{i=1}^n $$</annotation>\u0000</semantics></math> are two sets of points. In this paper, we seek a low-rank approximation to a kernel matrix where the sets of points <math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0005\" display=\"inline\" location=\"graphic/nla2519-math-0005.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>X</mi>\u0000</mrow>\u0000$$ X $$</annotation>\u0000</semantics></math> and <math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0006\" display=\"inline\" location=\"graphic/nla2519-math-0006.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>Y</mi>\u0000</mrow>\u0000$$ Y $$</annotation>\u0000</semantics></math> are large and are arbitrarily distributed, such as away from each other, “intermingled”, identical, and so forth. 
Such rectangular kernel matrices may arise, for example, in Gaussian process regression where <math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0007\" display=\"inline\" location=\"graphic/nla2519-math-0007.png\" overflow=\"scroll\">\u0000<semantics>\u0000<mrow>\u0000<mi>X</mi>\u0000</mrow>\u0000$$ X $$</annotation>\u0000</semantics></math> corresponds to the training data and <math altimg=\"urn:x-wil","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"148 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138502936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
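For comparison, a generic skeleton (CUR-type) approximation built from pivoted rows and columns is sketched below. Unlike the paper's geometric, data-driven approach it costs more than linear time, but it shows the kind of factorization being sought. The kernel, point sets, and rank are arbitrary choices of the sketch.

```python
import numpy as np
from scipy.linalg import qr, pinv

def cross_approx(K, k):
    """Skeleton (CUR-type) approximation K ~ C @ pinv(W) @ R built from k
    pivoted rows and columns; a generic construction, not the paper's
    linear-complexity geometric method."""
    _, _, cols = qr(K, pivoting=True, mode='economic')
    _, _, rows = qr(K.T, pivoting=True, mode='economic')
    I, J = rows[:k], cols[:k]
    return K[:, J] @ pinv(K[np.ix_(I, J)]) @ K[I, :]

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (500, 2))              # "intermingled" point sets
Y = rng.uniform(0.0, 1.0, (400, 2))
d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 0.5)                            # Gaussian kernel kappa(x, y)
Ka = cross_approx(K, 30)
print(np.linalg.norm(K - Ka) / np.linalg.norm(K))
```
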
{"title":"Issue Information","authors":"","doi":"10.1002/nla.2452","DOIUrl":"https://doi.org/10.1002/nla.2452","url":null,"abstract":"","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44500622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blockwise acceleration of alternating least squares for canonical tensor decomposition","authors":"D. Evans, Nan Ye","doi":"10.1002/nla.2516","DOIUrl":"https://doi.org/10.1002/nla.2516","url":null,"abstract":"The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well‐known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum‐based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than the state‐of‐the‐art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overheads relative to ALS and is simple to apply. Empirically, our proposed algorithm shows strong performance as compared to the state‐of‐the‐art acceleration techniques on both simulated and real tensors.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49113853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The generalized residual cutting method and its convergence characteristics","authors":"T. Abe, Anthony T. Chronopoulos","doi":"10.1002/nla.2517","DOIUrl":"https://doi.org/10.1002/nla.2517","url":null,"abstract":"Iterative methods and especially Krylov subspace methods (KSM) are a very useful numerical tool in solving for large and sparse linear systems problems arising in science and engineering modeling. More recently, the nested loop KSM have been proposed that improve the convergence of the traditional KSM. In this article, we review the residual cutting (RC) and the generalized residual cutting (GRC) that are nested loop methods for large and sparse linear systems problems. We also show that GRC is a KSM that is equivalent to Orthomin with a variable preconditioning. We use the modified Gram–Schmidt method to derive a stable GRC algorithm. We show that GRC presents a general framework for constructing a class of “hybrid” (nested) KSM based on inner loop method selection. We conduct numerical experiments using nonsymmetric indefinite matrices from a widely used library of sparse matrices that validate the efficiency and the robustness of the proposed methods.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46177245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}