{"title":"Asymptotic Behavior of the Number of Regression Quantile Breakpoints","authors":"PortnoyStephen","doi":"10.5555/3037621.3037630","DOIUrl":"https://doi.org/10.5555/3037621.3037630","url":null,"abstract":"In the general regression model $y_i = x'_i beta + e_i $, for $i = 1, cdots ,n$ and $beta in {bf R}^p $, the “regression quantile” $hat{beta } (theta )$ estimates the coefficients of the li...","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125525184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multigrid treatment of “Thin” domains","authors":"V. Mikulinksy","doi":"10.1137/0912050","DOIUrl":"https://doi.org/10.1137/0912050","url":null,"abstract":"Several attempts have been made to apply multigrid methods efficiently to problems on “thin”domains. In [J. Ruge and A. Brandt, “A multigrid approach for elasticity problems on 'thin' domains,” in Multigrid Methods: Theory, Applications and Supercomputing, S. F. McCormick, ed., Marcel Dekker, New York, 1988, pp. 541–555] a multigrid method is described that permits overcoming difficulties in the case of elasticity equations on a “thin” domain bounded by straight lines. Here a method is presented that can be used for a wide class of differential problems on “thin” domains with straight or curved boundaries. For this method applied to the equations of elasticity on a domain bounded by straight lines, the computer time and storage are lower, and the programming and extension to three-dimensional cases are easier, than for the method described by Ruge and Brandt.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122503958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A time-stepping algorithm for parallel computers","authors":"D. Worley","doi":"10.1137/0911049","DOIUrl":"https://doi.org/10.1137/0911049","url":null,"abstract":"Parabolic and hyperbolic differential equations are often solved numerically by time-stepping algorithms. These algorithms have been regarded as sequential in time; that is, the solution on a time level must be known before the computation of the solution at subsequent time levels can start. While this remains true in principle, it is demonstrated that it is possible for processors to perform useful work on many time levels simultaneously. Specifically, it is possible for processors assigned to “later” time levels to compute a very good initial guess for the solution based on partial solutions from previous time levels, thus reducing the time required for solution. The reduction in the solution time can be measured as parallel speedup.This algorithm is demonstrated for both linear and nonlinear problems. In addition, the convergence properties of the method based on the convergence properties of the underlying iterative method are discussed, and an accurate performance model from which the speedup and oth...","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123621238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High Performance Preconditioning","authors":"H. A. Vorst","doi":"10.1137/0910071","DOIUrl":"https://doi.org/10.1137/0910071","url":null,"abstract":"The discretization of second-order elliptic partial differential equations over three-dimensional rectangular regions, in general, leads to very large sparse linear systems. Because of their huge order and their sparseness, these systems can only be solved by iterative methods using powerful computers, e.g., vector supercomputers. Most of those methods are only attractive when used in combination with a so-called preconditioning matrix. Unfortunately, the more effective preconditioners, such as successive over-relaxation and incomplete decompositions, do not perform very well on most vector computers if used in a straightforward manner. In this paper it is shown how a rather high performance can be achieved for these preconditioners.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121950846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On optimal interpolation triangle incidences","authors":"E. D'Azevedo, R. B. Simpson","doi":"10.1137/0910064","DOIUrl":"https://doi.org/10.1137/0910064","url":null,"abstract":"The problem of determining optimal incidences for triangulating a given set of vertices for the model problem of interpolating a convex quadratic surface by piecewise linear functions is studied. An exact expression for the maximum error is derived, and the optimality criterion is minimization of the maximum error. The optimal incidences are shown to be derivable from an associated Delaunay triangulation and hence are computable in $O(Nlog N)$ time for N vertices.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123991754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fast algorithm for reordering sparse matrices for parallel factorization","authors":"J. G. Lewis, B. Peyton, A. Pothen","doi":"10.1137/0910070","DOIUrl":"https://doi.org/10.1137/0910070","url":null,"abstract":"Jess and Kees [IEEE Trans. Comput., C-31 (1982), pp. 231–239] introduced a method for ordering a sparse symmetric matrix A for efficient parallel factorization. The parallel ordering is computed in two steps. First, the matrix A is ordered by some fill-reducing ordering. Second, a parallel ordering of A is computed from the filled graph that results from symbolically factoring A using the initial fill-reducing ordering. Among all orderings whose fill lies in the filled graph, this parallel ordering achieves the minimum number of parallel steps in the factorization of A. Jess and Kees did not specify the implementation details of an algorithm for either step of this scheme. Liu and Mirzaian [SIAM J. Discrete Math., 2 (1989), pp. 100–107] designed an algorithm implementing the second step, but it has time and space requirements higher than the cost of computing common fill-reducing orderings.A new fast algorithm that implements the parallel ordering step by exploiting the clique tree representation of a chordal graph is presented. The cost of the parallel ordering step is reduced well below that of the fill-reducing step. This algorithm has time and space complexity linear in the number of compressed subscripts for L, i.e., the sum of the sizes of the maximal cliques of the filled graph. Running times nearly identical to Liu's heuristic composite rotations algorithm, which approximates the minimum number of parallel steps, are demonstrated empirically.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114350012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Minimum Degree Ordering with Constraints","authors":"Joseph W. H. Liu","doi":"10.1137/0910069","DOIUrl":"https://doi.org/10.1137/0910069","url":null,"abstract":"A hybrid scheme for ordering sparse symmetric matrices is considered. It is based on a combined use of the top-down nested dissection and the bottom-up minimum degree ordering schemes. A separator set is first determined by some form of incomplete nested dissection. The minimum degree ordering is then applied subject to the constraint that the separator nodes must be ordered last. It is shown experimentally that the quality of the resulting ordering from this constrained scheme exhibits less sensitivity to the initial matrix ordering than that of the original minimum degree ordering. An important application of this approach to find orderings suitable for parallel elimination is also illustrated.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127867764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of sparse matrix solvers as effective preconditioners","authors":"D. P. Young, R. Melvin, F. Johnson, J. Bussoletti, L. Wigton, S. Samant","doi":"10.1137/0910072","DOIUrl":"https://doi.org/10.1137/0910072","url":null,"abstract":"In this paper the use of a new out-of-core sparse matrix package for the numerical solution of partial differential equations involving complex geometries arising from aerospace applications is discussed. The sparse matrix solver accepts contributions to the matrix elements in random order and assembles the matrix using fast sort/merge routines. Fill-in is reduced through the use of a physically based nested dissection ordering. For very large problems a drop tolerance is used during the matrix decomposition phase. The resulting incomplete factorization is an effective preconditioner for Krylov subspace methods, such as GMRES. Problems involving 200,000 unknowns routinely are solved on the Cray X-MP using 64MW of solid-state storage device (SSD).","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115627832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the quadratic convergence of the serial singular value decomposition Jacobi methods for triangular matrices","authors":"V. Hari","doi":"10.1137/0910065","DOIUrl":"https://doi.org/10.1137/0910065","url":null,"abstract":"The quadratic convergence of the serial singular value decomposition (SVD) Jacobi methods for triangular matrices is proved. The obtained bounds are as sharp as those obtained by Wilkinson and Van Kempen for the symmetric Jacobi method. Special attention is paid to finding the structure of almost diagonal essentially triangular matrices with multiple singular values.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116160934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Krylov Subspace Methods on Supercomputers","authors":"Y. Saad","doi":"10.1137/0910073","DOIUrl":"https://doi.org/10.1137/0910073","url":null,"abstract":"This paper presents a short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Then polynomial preconditioning as an alternative to standard incomplete factorization techniques is discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these ideas and others is given in this article, as well as an attempt to comment on their effectiveness or potential for different types of architectures.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129644542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}