{"title":"A Skew-Symmetric Lanczos Bidiagonalization Method for Computing Several Extremal Eigenpairs of a Large Skew-Symmetric Matrix","authors":"Jinzhi Huang, Zhongxiao Jia","doi":"10.1137/23m1553029","DOIUrl":"https://doi.org/10.1137/23m1553029","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 1114-1147, June 2024. <br/> Abstract. The spectral decomposition of a real skew-symmetric matrix is shown to be equivalent to a specific structured singular value decomposition (SVD) of the matrix. Based on such equivalence, we propose a skew-symmetric Lanczos bidiagonalization (SSLBD) method to compute extremal singular values and the corresponding singular vectors of the matrix, from which its extremal conjugate eigenpairs are recovered pairwise in real arithmetic. A number of convergence results on the method are established, and accuracy estimates for approximate singular triplets are given. In finite precision arithmetic, it is proven that the semi-orthogonality of each set of the computed left and right Lanczos basis vectors and the semi-biorthogonality of two sets of basis vectors are needed to compute the singular values accurately and to make the method work as if it does in exact arithmetic. A commonly used efficient partial reorthogonalization strategy is adapted to maintain the desired semi-orthogonality and semi-biorthogonality. For practical purpose, an implicitly restarted SSLBD algorithm is developed with partial reorthogonalization. Numerical experiments illustrate the effectiveness and overall efficiency of the algorithm.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"27 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141257265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differential Geometry with Extreme Eigenvalues in the Positive Semidefinite Cone","authors":"Cyrus Mostajeran, Nathaël Da Costa, Graham Van Goffrier, Rodolphe Sepulchre","doi":"10.1137/23m1563906","DOIUrl":"https://doi.org/10.1137/23m1563906","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 1089-1113, June 2024. <br/> Abstract. Differential geometric approaches to the analysis and processing of data in the form of symmetric positive definite (SPD) matrices have had notable successful applications to numerous fields, including computer vision, medical imaging, and machine learning. The dominant geometric paradigm for such applications has consisted of a few Riemannian geometries associated with spectral computations that are costly at high scale and in high dimensions. We present a route to a scalable geometric framework for the analysis and processing of SPD-valued data based on the efficient computation of extreme generalized eigenvalues through the Hilbert and Thompson geometries of the semidefinite cone. We explore a particular geodesic space structure based on Thompson geometry in detail and establish several properties associated with this structure. Furthermore, we define a novel inductive mean of SPD matrices based on this geometry and prove its existence and uniqueness for a given finite collection of points. Finally, we state and prove a number of desirable properties that are satisfied by this mean.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"66 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Riemannian Preconditioned Coordinate Descent for Low Multilinear Rank Approximation","authors":"Mohammad Hamed, Reshad Hosseini","doi":"10.1137/21m1463896","DOIUrl":"https://doi.org/10.1137/21m1463896","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 1054-1075, June 2024. <br/> Abstract. This paper presents a memory-efficient, first-order method for low multilinear rank approximation of high-order, high-dimensional tensors. In our method, we exploit the second-order information of the cost function and the constraints to suggest a new Riemannian metric on the Grassmann manifold. We use a Riemmanian coordinate descent method for solving the problem and also provide a global convergence analysis matching that of the coordinate descent method in the Euclidean setting. We also show that each step of our method with the unit step size is actually a step of the orthogonal iteration algorithm. Experimental results show the computational advantage of our method for high-dimensional tensors.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"25 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141152353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parameterized Interpolation of Passive Systems","authors":"Peter Benner, Pawan Goyal, Paul Van Dooren","doi":"10.1137/23m1580528","DOIUrl":"https://doi.org/10.1137/23m1580528","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 1035-1053, June 2024. <br/> Abstract. We study the tangential interpolation problem for a passive transfer function in standard state-space form. We derive new interpolation conditions based on the computation of a deflating subspace associated with a selection of spectral zeros of a parameterized para-Hermitian transfer function. We show that this technique improves the robustness of the low order model and that it can also be applied to nonpassive systems, provided they have sufficiently many spectral zeros in the open right half-plane. We analyze the accuracy needed for the computation of the deflating subspace, in order to still have a passive lower order model and we derive a novel selection procedure of spectral zeros in order to obtain low order models with a small approximation error.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"200 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141152360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spectral Analysis of Preconditioned Matrices Arising from Stage-Parallel Implicit Runge–Kutta Methods of Arbitrarily High Order","authors":"Ivo Dravins, Stefano Serra-Capizzano, Maya Neytcheva","doi":"10.1137/23m1552498","DOIUrl":"https://doi.org/10.1137/23m1552498","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 1007-1034, June 2024. <br/> Abstract. The use of high order fully implicit Runge–Kutta methods is of significant importance in the context of the numerical solution of transient partial differential equations, in particular when solving large scale problems due to fine space resolution with many millions of spatial degrees of freedom and long time intervals. In this study we consider strongly [math]-stable implicit Runge–Kutta methods of arbitrary order of accuracy, based on Radau quadratures, for which efficient preconditioners have been introduced. A refined spectral analysis of the corresponding matrices and matrix sequences is presented, both in terms of localization and asymptotic global distribution of the eigenvalues. Specific expressions of the eigenvectors are also obtained. The given study fully agrees with the numerically observed spectral behavior and substantially improves the theoretical studies done in this direction so far. Concluding remarks and open problems end the current work, with specific attention to the potential generalizations of the hereby suggested general approach.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"42 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140931836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Note on the Randomized Kaczmarz Algorithm for Solving Doubly Noisy Linear Systems","authors":"El Houcine Bergou, Soumia Boucherouite, Aritra Dutta, Xin Li, Anna Ma","doi":"10.1137/23m155712x","DOIUrl":"https://doi.org/10.1137/23m155712x","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 992-1006, June 2024. <br/> Abstract. Large-scale linear systems, [math], frequently arise in practice and demand effective iterative solvers. Often, these systems are noisy due to operational errors or faulty data-collection processes. In the past decade, the randomized Kaczmarz algorithm (RK) has been studied extensively as an efficient iterative solver for such systems. However, the convergence study of RK in the noisy regime is limited and considers measurement noise in the right-hand side vector, [math]. Unfortunately, in practice, that is not always the case; the coefficient matrix [math] can also be noisy. In this paper, we analyze the convergence of RK for doubly noisy linear systems, i.e., when the coefficient matrix, [math], has additive or multiplicative noise, and [math] is also noisy. In our analyses, the quantity [math] influences the convergence of RK, where [math] represents a noisy version of [math]. We claim that our analysis is robust and realistically applicable, as we do not require information about the noiseless coefficient matrix, [math], and by considering different conditions on noise, we can control the convergence of RK. We perform numerical experiments to substantiate our theoretical findings.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"148 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Some New Results on the Maximum Growth Factor in Gaussian Elimination","authors":"Alan Edelman, John Urschel","doi":"10.1137/23m1571903","DOIUrl":"https://doi.org/10.1137/23m1571903","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 967-991, June 2024. <br/> Abstract. This paper combines modern numerical computation with theoretical results to improve our understanding of the growth factor problem for Gaussian elimination. On the computational side we obtain lower bounds for the maximum growth for complete pivoting for [math] and [math] using the Julia JuMP optimization package. At [math] we obtain a growth factor bigger than [math]. The numerical evidence suggests that the maximum growth factor is bigger than [math] if and only if [math]. We also present a number of theoretical results. We show that the maximum growth factor over matrices with entries restricted to a subset of the reals is nearly equal to the maximum growth factor over all real matrices. We also show that the growth factors under floating point arithmetic and exact arithmetic are nearly identical. Finally, through numerical search, and stability and extrapolation results, we provide improved lower bounds for the maximum growth factor. Specifically, we find that the largest growth factor is bigger than [math] for [math], and the lim sup of the ratio with [math] is greater than or equal to [math]. In contrast to the old conjecture that growth might never be bigger than [math], it seems likely that the maximum growth divided by [math] goes to infinity as [math].","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"65 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140800244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditioning of Matrix Functions at Quasi-Triangular Matrices","authors":"Awad H. Al-Mohy","doi":"10.1137/22m1543689","DOIUrl":"https://doi.org/10.1137/22m1543689","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 954-966, June 2024. <br/>Abstract. The area of matrix functions has received growing interest for a long period of time due to their growing applications. Having a numerical algorithm for a matrix function, the ideal situation is to have an estimate or bound for the error returned alongside the solution. Condition numbers serve this purpose; they measure the first-order sensitivity of matrix functions to perturbations in the input data. We have observed that the existing unstructured condition number leads most of the time to inferior bounds of relative forward errors for some matrix functions at triangular and quasi-triangular matrices. We propose a condition number of matrix functions exploiting the structure of triangular and quasi-triangular matrices. We then adapt an existing algorithm for exact computation of the unstructured condition number to an algorithm for exact evaluation of the structured condition number. Although these algorithms are direct rather than iterative and useful for testing the numerical stability of numerical algorithms, they are less practical for relatively large problems. Therefore, we use an implicit power method approach to estimate the structured condition number. Our numerical experiments show that the structured condition number captures the behavior of the numerical algorithms and provides sharp bounds for the relative forward errors. In addition, the experiment indicates that the power method algorithm is reliable to estimate the structured condition number.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"89 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140800267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Sketch-and-Precondition Least Squares Solvers Numerically Stable?","authors":"Maike Meier, Yuji Nakatsukasa, Alex Townsend, Marcus Webb","doi":"10.1137/23m1551973","DOIUrl":"https://doi.org/10.1137/23m1551973","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 905-929, June 2024. <br/> Abstract. Sketch-and-precondition techniques are efficient and popular for solving large least squares (LS) problems of the form [math] with [math] and [math]. This is where [math] is “sketched” to a smaller matrix [math] with [math] for some constant [math] before an iterative LS solver computes the solution to [math] with a right preconditioner [math], where [math] is constructed from [math]. Prominent sketch-and-precondition LS solvers are Blendenpik and LSRN. We show that the sketch-and-precondition technique in its most commonly used form is not numerically stable for ill-conditioned LS problems. For provable and practical backward stability and optimal residuals, we suggest using an unpreconditioned iterative LS solver on [math] with [math]. Provided the condition number of [math] is smaller than the reciprocal of the unit roundoff, we show that this modification ensures that the computed solution has a backward error comparable to the iterative LS solver applied to a well-conditioned matrix. Using smoothed analysis, we model floating-point rounding errors to argue that our modification is expected to compute a backward stable solution even for arbitrarily ill-conditioned LS problems. Additionally, we provide experimental evidence that using the sketch-and-solve solution as a starting vector in sketch-and-precondition algorithms (as suggested by Rokhlin and Tygert in 2008) should be highly preferred over the zero vector. The initialization often results in much more accurate solutions—albeit not always backward stable ones.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"4 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140800378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stochastic [math]th Root Approximation of a Stochastic Matrix: A Riemannian Optimization Approach","authors":"Fabio Durastante, Beatrice Meini","doi":"10.1137/23m1589463","DOIUrl":"https://doi.org/10.1137/23m1589463","url":null,"abstract":"SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 2, Page 875-904, June 2024. <br/> Abstract. We propose two approaches, based on Riemannian optimization for computing a stochastic approximation of the [math]th root of a stochastic matrix [math]. In the first approach, the approximation is found in the Riemannian manifold of positive stochastic matrices. In the second approach, we introduce the Riemannian manifold of positive stochastic matrices sharing with [math] the Perron eigenvector and we compute the approximation of the [math]th root of [math] in such a manifold. This way, differently from the available methods based on constrained optimization, [math] and its [math]th root approximation share the Perron eigenvector. Such a property is relevant, from a modeling point of view, in the embedding problem for Markov chains. The extended numerical experimentation shows that, in the first approach, the Riemannian optimization methods are generally faster and more accurate than the available methods based on constrained optimization. In the second approach, even though the stochastic approximation of the [math]th root is found in a smaller set, the approximation is generally more accurate than the one obtained by standard constrained optimization.","PeriodicalId":49538,"journal":{"name":"SIAM Journal on Matrix Analysis and Applications","volume":"13 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140631166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}